I'M BACK WITH ANOTHER CRAZY IDEA.
piratePenguin:
I posted this on another forum under the title "the web vs clouds".
At the outset, the web as we know it is webpages and links. There is usually a pile of server-side code behind the webpages (or a web app), but generally this code is hidden, and even if it isn't, there is no well-defined method to download it.
Now, leaving security concerns aside for a moment, imagine you could be using a web app, or visiting a website, and you could click View Source to fetch the entire source code of what you are seeing, and you can (emphasis on can) run the website on your own computer. This website also keeps all of its data in a p2p distributed cloud. The website goes down. You don't have a copy of the website on your computer, because why would anyone do that? But it so happens that the backend code for the site is also in the p2p cloud, so you can still get on with your business by running it on your computer.
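To make that concrete, here's a toy sketch in Python. The "cloud" below is just a dict standing in for the p2p store, and the site name and render function are invented for illustration; the only point is that the backend source lives in the cloud next to the data, so anyone can fetch and run it:

--- Code: ---
# Toy simulation: CLOUD stands in for the p2p distributed store. A site's
# backend source sits in the cloud alongside its data, so when the real
# site is down you can fetch both and run the site yourself.
CLOUD = {
    "wiki.example/source": (
        "def render(data):\n"
        "    return 'Articles: ' + ', '.join(sorted(data))\n"
    ),
    "wiki.example/data": {"Python", "Darcs"},
}

def run_locally(site):
    # "View source": fetch the backend code from the cloud and execute it.
    namespace = {}
    exec(CLOUD[site + "/source"], namespace)
    return namespace["render"](CLOUD[site + "/data"])

# Works even while the real site is down, because nothing here
# touches the original server.
print(run_locally("wiki.example"))
--- End code ---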
I couldn't get to Wikipedia for a bit today. Imagine that, as above, I start running it on my own computer. I make some edits to some articles, and what do I do? I put those edits into the cloud. Then, when the real Wikipedia comes back, it can grab them.
Imagine there was an edit button in your browser (or your cloudscape), and you could create your own version of a page (for simplicity, imagine a static page, nothing like a wiki in any case) by adding or removing text and such. You put your version up in the cloud, and the cloud is versioned, so people can track your changes, including the original author. That's collaboration (primitive, maybe), and it works naturally rather than having to be implemented for every different web app out there (apps can still offer specialized collaboration).
Thoughts?
davidnix71:
Why not just go one step further and have the site reside in the cloud as an array, like a RAID 5 or 6?
This would not work well for a wiki, because someone has to control the edits. For a static site, the NAS RAID would be accessed like p2p. If the host was down, then as long as the peers have static IPs, or have a server somewhere they announce to so they know where each other are, one of them becomes the new temporary host. The new host would send a request to an OpenDNS server to update the site's location on the web.
Users would have to use OpenDNS when the main host was down, because the regular DNS servers probably wouldn't allow that kind of rerouting.
Lead Head:
That's not a bad idea, actually. The only issues I can see, though, are bandwidth, and what happens when the site comes back online and there are hundreds of thousands of changes. What stops would there be to prevent overloading the software, while still allowing updates to be applied in a timely manner?
piratePenguin:
Hmm, David, yeah, I was thinking about different approaches today; yours is an interesting one.
There's no real challenge in setting up a version-controlled p2p cloud: there are distributed filesystems out there (I was referred to Tahoe today, which is GPL and used by AllMyData.com), and then we can use VCSes like git, Hg, or darcs (which I like, since it's pretty good at resolving conflicts automatically) on top of that, so there's very little coding to get things started.
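To show how little is needed, here's a minimal sketch of the core of a versioned store, the kind of thing darcs or git over Tahoe would give us for free: content-addressed blobs plus a parent chain, so anyone can track changes to a page. Plain Python, invented for illustration; the real thing would just reuse an existing VCS:

--- Code: ---
# Minimal versioned store: blobs are content-addressed by hash, and each
# version records its parent, so a page's edit history is trackable.
import hashlib

blobs = {}     # content-addressed storage: hash -> bytes
history = []   # version chain for one page: list of (hash, parent_hash)

def commit(content, parent=None):
    digest = hashlib.sha256(content).hexdigest()
    blobs[digest] = content
    history.append((digest, parent))
    return digest

v1 = commit(b"Hello, web.")
v2 = commit(b"Hello, distributed web.", parent=v1)  # a user's edited version

# The original author (or anyone) can walk the chain back to the source.
assert history[1] == (v2, v1) and blobs[v1] == b"Hello, web."
--- End code ---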
"Cloud" is the newest buzz word since web 2.0 and I read a review of pretty damn nice p2p proprietary cloud on ReadWriteWeb today (I said FUCK before I realized this was just the name of the website :) link). Everyones putting their photos and documents up in clouds but putting a web up there is new territory..
And this is the only really big question I can see: how best do we define a website, or can we do this at all? A webpage is easily defined, since it's just one document, but a website is an application, and it can be simple like "does nothing" (except serve the basic document) or complicated like Gmail or a chat room (decentralised chat is an application I was trying to figure out today). Current websites require CGI or Python or Perl configured for Apache or IIS, on Windows or GNU/Linux or god knows what. The only way I can think of to encapsulate every environment is virtual machines (is this along your lines, David?), but we can't expect to host an image of every website's environment in this cloud, can we? I'm sure the law would work against us there too.
So it might be more reasonable to have a few standard environments, where websites select compatible configurations in a config file in the source code. It isn't a huge deal if one computer can't provide IIS and PHP; it just means it should maybe wait, or see if the website is up and running elsewhere (the protocol could probably redirect automatically). In any case, there will be a not-insignificant turnaround from a website going down to it running on your computer as it did remotely, and this is god damn hackish and will never cover all the ground.
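The config file could be dead simple. Here's one possible shape for it (the format and the field names are pure invention on my part), plus the check a peer would do before taking a site on:

--- Code: ---
# Hypothetical "compatible configurations" manifest: the site declares which
# standard environments it can run under, and a peer checks whether it can
# host the site before taking it on. Format and field names are invented.
import configparser

MANIFEST = """
[website]
name = example-wiki
environments = python3+sqlite, php+mysql
"""

LOCAL_STACKS = {"python3+sqlite"}   # what this particular peer can provide

cfg = configparser.ConfigParser()
cfg.read_string(MANIFEST)
wanted = {e.strip() for e in cfg["website"]["environments"].split(",")}

# If there's no overlap, this peer waits or redirects to a capable peer.
print("can host" if wanted & LOCAL_STACKS else "wait or redirect elsewhere")
--- End code ---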
Since websites will need to be ported anyhow (they will need porting if their backend source is to be made available, something I don't expect to be popular with everyone (think Google), but I guess that means there's a void there to fill, lol), maybe we should define a standard programming language for web apps, one that would do for website development what ODF is supposed to do for document formats.
This is where you get a billion hackers whining because they can't use their favorite mutt of a language, but it means this can work really well for those who bite. Standardizing the front end of the web has worked pretty well; maybe it can work on the back end as well. In any case, all websites could probably still be made available in the cloud (with the collaboration stuff possible), just without their source code available in that well-defined manner, which leads to some functional loss (obviously, like not being able to see the source or run it in a standard manner).
For the sake of argument, or probably for a prototype, the programming language could be just Python. The perfectionist in me would love to say JavaScript, but just not yet.
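As in, if the standard backend language were just Python, a site's whole backend could be one self-contained file that any peer can run with nothing but the standard library. A toy example:

--- Code: ---
# Toy backend in the hypothetical "standard language": pure stdlib Python,
# so any peer that grabbed this file from the cloud could run the site.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1>Served by whichever peer is running me right now</h1>"]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()
--- End code ---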
If we make the backends of a whole pile of websites converge like this, we will be using our standard HTML and CSS etc. on the browser end, HTTP and that same stuff in between, and your backend language on the other end (specifically not a part of the browser, for security; the web was never supposed to be able to create files on your computer), and basically we have a developer toolkit where application work and data are separated from the user's machine, but also with no dependence on any one server. And such sites will still be every bit compatible with the web we have today.
If a million people were accessing a website through the cloud, and its source is available as above, I originally imagined (or dreamed) that, in p2p fashion, other computers would replicate the role of the website and take some load off the server. Auto load balancing: this is another pretty cool thing we should aim for.
I think this post is too long with too many details and I'm tired. Oops.
--- Quote ---That's not a bad idea, actually. The only issues I can see, though, are bandwidth, and what happens when the site comes back online and there are hundreds of thousands of changes. What stops would there be to prevent overloading the software, while still allowing updates to be applied in a timely manner?
--- End quote ---
Basically, in the cloud there will be a repository, or a location in a repository, where the website's database will be found.
When a website goes down, any instance that takes on its role will branch off the original database, and any instance can manage hundreds of users (it depends on the server). The changes will occur in this branch, and there may be other branches from other instances (this is where conflicts can arise, but there's a way darcs can manage this).
When the website comes back up, it can find the branches of its database and merge them. Originally I said that users could download the source and run the app on their own computers, but in reality that's dumb, since a more appropriate server can do that job for hundreds of users. In this fashion there will be one or just a few branches of the database, not one for every user, which makes merging much easier and quicker. How long will it take? Hmm, probably not as quick as the time the database spends performing thousands of queries, but by how much I don't know.
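Roughly, the merge step would look like this (toy Python again; real conflict handling would come from darcs, here conflicts are only detected, not resolved):

--- Code: ---
# Branch-and-merge sketch: while the site is down, each stand-in instance
# branches the database and collects edits; when the site returns, it
# merges the branches. Pages and edits here are invented for illustration.
base = {"ArticleA": "old text", "ArticleB": "old text"}

branch1 = dict(base, ArticleA="edited on instance 1")
branch2 = dict(base, ArticleB="edited on instance 2")

merged, conflicts = dict(base), []
for branch in (branch1, branch2):
    for page, text in branch.items():
        if text == base[page]:
            continue                    # unchanged in this branch
        if merged[page] != base[page] and merged[page] != text:
            conflicts.append(page)      # both branches touched this page
        else:
            merged[page] = text

print(merged)     # both edits applied
print(conflicts)  # empty here; darcs would try to resolve these itself
--- End code ---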
Using OpenDNS is very good thinking.
piratePenguin:
Holy shit http://code.google.com/appengine/