I've moved recently, and the only computer I've got working right now is my old Intel Classmate notebook. It's slow, but it works well for 95% of what I want to do, even if the keyboard is a little small for my giant claws. Anyway, I was running Ubuntu 10.04 Beta when the Xorg memory leak bug hit, and I used that as an excuse to try some stuff I'd been thinking about for a while.
I installed and tried Fedora 13 Beta for about a week. I got really hands on with it, and I have some pros and cons that I'll (hopefully) cover this weekend. I also tried Tinycore Linux, which some of you may never have heard of.
Tinycore is ... tiny: it's 10MB, which puts it right at the bottom of the "small Linux" distros. It's also very core. There are no apps. It boots to a minimal desktop (a window manager built for Tinycore) with a small dock (Wbar), and nothing else. Oh, there's a terminal, a control panel, and an app installer (using FLTK). It feels very much more "then" than "now." Believe me, though, it boots fast. From my SD card, the desktop is fully functional in 3 seconds -- and my SD card is slow.
By default, Tinycore boots into "cloud" mode, which is like a live CD, but it runs completely from memory. With only 10MB, you can understand that running in memory isn't a problem. You can also guess how blazingly fast it is. When you want to run an app, you open up the application installer, search and choose (many are available), and click "Install." The application(s) appear in your dock.
Everything continues to run completely in memory. Installing means downloading a TCZ package, which is really just an archive of the binary, along with a hash file and a dependency file. Dependencies are handled automatically. When the package is installed, the original files are deleted to make room in RAM. Starting an application is almost instantaneous, even for a big app like Firefox. Since the package format is so simple, even the newest software (like PCManFM2) is available. The simple package format also means that the application installation takes almost no time, even on my little netbook. Chrome Browser installs in about 1.5 seconds, for example.
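To make the mechanics concrete, here's a minimal Python sketch of what that kind of install flow amounts to: grab the package along with its hash and dependency lists, recurse through the dependencies, verify, load, and throw the download away. The repository URL, cache path, and file-name suffixes are stand-ins for illustration; the real Tinycore tools are shell scripts, not Python.

```python
# Illustrative sketch of a TCZ-style install -- not the real Tinycore tooling.
# Repository URL, cache path, and file suffixes are assumptions.
import hashlib
import os
import urllib.request

REPO = "http://repo.example.org/tcz"   # placeholder repository
CACHE = "/tmp/tce"                     # downloads land here, then get removed

def fetch(name, suffix):
    """Download one piece of the package (archive, hash, or dependency list)."""
    os.makedirs(CACHE, exist_ok=True)
    target = os.path.join(CACHE, name + suffix)
    try:
        urllib.request.urlretrieve(f"{REPO}/{name}{suffix}", target)
        return target
    except OSError:
        return None                    # e.g. no .dep file means no dependencies

def verified(name):
    """Compare the archive's md5 against the published hash file."""
    with open(os.path.join(CACHE, name + ".tcz"), "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    with open(os.path.join(CACHE, name + ".tcz.md5.txt")) as f:
        return f.read().split()[0] == digest

def install(name, seen=None):
    """Install a package and, recursively, everything it depends on."""
    seen = set() if seen is None else seen
    if name in seen:
        return
    seen.add(name)
    dep_list = fetch(name, ".tcz.dep")
    if dep_list:
        for dep in open(dep_list).read().split():
            install(dep[:-4] if dep.endswith(".tcz") else dep, seen)
    fetch(name, ".tcz")
    fetch(name, ".tcz.md5.txt")
    if verified(name):
        print(f"loaded {name} into RAM")   # stand-in for the real loop-mount
    os.remove(os.path.join(CACHE, name + ".tcz"))  # free the space again

install("firefox")
```

The point isn't the code; it's that a package is just an archive plus two tiny text files, which is why the whole operation takes a second or two.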
You can also set Tinycore to run in another mode, called TCE. If you specify TCE mode on boot and give it a location to save to, Tinycore will save all your packages to that location so that you won't have to download them again. You can also set applications to be loaded automatically "on boot" or only when first launched, "on launch." Applications aren't permanently stored in the filesystem unless you go to real trouble to make that happen. They are always freshly installed, either at boot or at launch.
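As a rough sketch of the difference between those two policies (the tce/ path and the list file name here are my guesses for illustration, not the actual boot scripts), the only thing that changes is when an extension gets loaded:

```python
# Rough sketch of "on boot" vs "on launch" loading. The tce/ path and the
# onboot list file name are assumptions for illustration only.
import os

TCE_DIR = "/mnt/sda1/tce"              # assumed save location given at boot

def load_extension(name):
    # Stand-in for mounting the saved package into the live filesystem.
    print(f"mounting {name}")

def boot():
    """'On boot': everything in the saved list is loaded at every startup."""
    onboot = os.path.join(TCE_DIR, "onboot.lst")
    if os.path.exists(onboot):
        for line in open(onboot):
            if line.strip():
                load_extension(line.strip())

def launch(app):
    """'On launch': the extension is loaded only when the app is first run."""
    load_extension(app)
    print(f"exec {app}")

boot()
launch("firefox")
```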
This completely original distribution takes a new approach to computing: on demand. Imagine if my computer just PXE booted to Tinycore -- how fast would it be? With GbE Internet connections coming, and 10GbE after that, how much sense does it make to store my OS on my desktop? Network speeds eclipse most hard disk speeds, even now.
I can see putting up a server in my house for PXE booting a custom image of something like Tinycore, with apps set "on launch" on an NFS directory from the server. This is starting to sound a bit like LTSP (which is a great project I've deployed a couple of times), but everything here is local and running completely from memory. Applications will launch faster than their HD-bound cousins since the network is quicker than my HD.
Why do I say that? Let's look at what happened to me two nights ago. I installed Ubuntu 10.04 over the network, using just a kernel and initial ramdisk as a starting point. It's my favorite way to install when I have a decent connection. (I'll write about the experience soon.) I downloaded the base Ubuntu system (2 minutes) and installed it (10 minutes). Next, I downloaded the full desktop (6 minutes) and installed it (2.5 hours!). This is on a 12Mb/s network with a 30GB netbook HD. Imagine the speed on 100Mb or even Gb Ethernet.
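Just to put numbers on that, here's the back-of-the-envelope arithmetic; the package sizes are rough estimates implied by the download times above, not measured values.

```python
# Back-of-the-envelope arithmetic for the network install above. Sizes are
# rough estimates implied by the quoted download times, not measurements.
def transfer_minutes(size_mb, link_mbps):
    """Minutes to move size_mb megabytes over a link_mbps megabit/s link."""
    return size_mb * 8 / link_mbps / 60

# ~2 min (base) and ~6 min (desktop) at 12 Mb/s imply roughly:
base_mb = 12 / 8 * 2 * 60       # ~180 MB
desktop_mb = 12 / 8 * 6 * 60    # ~540 MB

for link in (12, 100, 1000):    # my link, Fast Ethernet, Gigabit
    total = transfer_minutes(base_mb + desktop_mb, link)
    print(f"{link:>4} Mb/s: ~{total:.1f} minutes to pull everything down")
```

Eight minutes of downloading at 12Mb/s becomes about a minute on 100Mb and a few seconds on Gigabit; it's the local install step, not the network, that dominates.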
Where does this all end? SaaS, folks. It's coming. On demand computing will be here soon.
Friday, April 30, 2010
Saturday, April 24, 2010
My Thoughts on "Then and Now"
There's a meme appearing on Gnome Planet and Ubuntu Planet where people post their first Linux desktop and compare it to what they run now. I thought I'd put my two cents in. First, my first!
Red Hat 5.1, courtesy ToastyTech.com
It looks primitive, I agree, but let's compare it to what I was using before.
Windows 95, courtesy of AresLuna.org.
You know what? They weren't that different. Win95 had Plug'n'Play, but it worked so badly that it shouldn't have been a feature of the OS. Memory was unprotected, so the OS would hang for no reason whatsoever. Moving to RH5 seemed like a joy, once you got it set up. That was the painful part. Oh, and it supported like three pieces of consumer hardware, so I had to go out to buy a new, "real," modem. Netscape sucked, but so did IE.
Let's move forward to today, on my netbook:
There's a better theme, and the controls all moved (but that happened years ago with the move to GNOME 2). Still, there's not a lot different in the UI. It makes my point from last week -- don't change the UI. The core libraries, though, are completely different. Applications can easily communicate, and there are standard libraries for things like communication, media, and document rendering. None of that was true of either RH5 or Win95.
Where's the competition?
Windows 7, from blogcdn.com
Similar level of difference from Win95 to Win7 as from RH5 to Fedora 13, eh? (I thought about trying to get RHEL 5.5 and taking a screenshot just so I could say "From RH5 to RH5.5," but I didn't.) Again, there is some flash, and the internals have all been replaced (by NT!), but the basic UI isn't significantly different.
I thought we'd be further along by now. Where's my flying car? I guess I'll look to the smartphone market to see the changes I really want.
Labels: Operating system
An Open Letter to the Mozilla and Chrome Developers
At the close of F8, and knowing Facebook's plans, here's what I'd like to see:
- I'm tired of creating new accounts at every website on Earth. OpenID offers a way around this using OAuth, but I need to choose and type in my OpenID provider. My XMPP provider can be used as my OpenID provider (as OneSocialWeb does), and the browser will know my identity, making it easy to connect. The website I connect to will simply become my "friend." Of course, Mozilla and Chrome need to implement private browsing and profiles so that I can have several identities or even remain anonymous if I so choose.
- Once a website is in my XMPP contact list, I can give the site atomic permission to view only the parts of my profile and activities I choose, whether the limits be by network, group, individual, or other criteria. The access to this information is securely based on XMPP's permission system, which is robust. Much like Facebook's new permission system, websites can use this limited information to customize the site experience for me and give me more information about others I know who are also on the site. I could even delegate authority, for example, to edit a Flickr photo album of a party to one or two of my friends who were also at the party.
- The site could talk to me about things that I would find interesting. It could update me both in real time and on my OneSocialWeb news page about changes, what my friends liked, or whatever. Best of all, I could choose to deny the website access to my feed and simply ignore anything from the site.
- I could see my friends' status updates, including ones that were private for me only, for groups, or for the public, and wouldn't need to rely on Twitter or Identi.ca (StatusNet) for that any more. Not to be a doom-sayer, but I don't think Twitter has enough of a business model that we should be programming it into our infrastructure. It's closed, to boot.
- I could do this kind of stuff without being tied to Facebook ... or GMail ... or Twitter ... or any particular provider. I or my company could even host a server. XMPP is federated, you see, and doesn't tie anyone down. OneSocialWeb doesn't say where your data needs to be stored (or kidnapped). I would have control of my data. No one would own it or me.
- I could ditch the multiple IM logins I now have, but which I rarely use because of the pain they cause me. With Mozilla and Chrome users on board, it would be easy to communicate with any of my friends. I wouldn't need to create yet another account on yet another IM server or download a client (or configure a multi-client) just to start talking. For my friends who don't have XMPP accounts, I could just recommend changing browsers (yours are better, anyway). Best of all, there would be no walled gardens. I could invite two friends who didn't know each other -- and who were previously on different IM networks -- to join a group chat with me, introducing them to each other. How novel is that? (There's a rough sketch of this kind of plumbing after this list.)
- XMPP supports VoIP and video chat. My friends would no longer need Skype, yet another service I'd have to sign up for and maintain.
- Mozilla and Google could use OneSocialWeb to build their brands by offering accounts on their own servers by default to new XMPP users, while still allowing a simple user@server + password login for people who already have XMPP accounts.
- Google and Mozilla are trying to do this stuff anyway, but at cross purposes. Google Buzz appears to be stillborn, and Google is having trouble getting its privacy permissions right. Mozilla Contacts is working on connecting the social networks people already use, much like Pidgin connects IM networks (i.e., not really). Why not use open standards, technology, and source code? Use OpenID, OAuth, and XMPP.
- Finally, it exists now. Servers and clients are available. Take the reference code from OneSocialWeb and adapt it to your browsers. It's less work than doing it from scratch.
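To show how little plumbing this actually takes, here's a minimal sketch using the SleekXMPP Python library as a stand-in for what a browser could do: hold one XMPP identity, treat a website as an ordinary roster contact, and receive its updates as normal messages. The JIDs are made up, and the "site as contact" flow is an illustration of the idea rather than OneSocialWeb's actual wire protocol.

```python
# Minimal sketch: the browser holds one XMPP identity, a website becomes just
# another roster contact, and updates arrive as ordinary messages. SleekXMPP
# is a stand-in; JIDs are invented, and this illustrates the concept rather
# than OneSocialWeb's real protocol.
from sleekxmpp import ClientXMPP

class BrowserIdentity(ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.start)
        self.add_event_handler("message", self.on_message)

    def start(self, event):
        self.send_presence()              # announce ourselves
        self.get_roster()                 # pull the existing contact list
        # "Befriending" a website is just a presence subscription request.
        self.send_presence(pto="updates@site.example.org", ptype="subscribe")

    def on_message(self, msg):
        # Real-time updates from sites and friends arrive as plain messages;
        # ignoring a site is as simple as dropping what it sends.
        if msg["type"] in ("chat", "normal"):
            print(f"{msg['from']}: {msg['body']}")

if __name__ == "__main__":
    client = BrowserIdentity("me@myserver.example.net", "secret")
    if client.connect():
        client.process(block=True)
```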
Related articles by Zemanta
- Facebook Adopts Open Standard for User Logins (webmonkey.com)
- Forget Google Buzz -- Promote OneSocialWeb (ibeentoubuntu.com)
- XAuth, OAuth, and Yahoo! OpenID (developer.yahoo.net)
Sunday, April 18, 2010
Change the back end, not the UI
I watched four hours of the Google Atmosphere event yesterday. Sure, a lot of it was Google preening and PR, but there were a lot of surprises. The iPhone and Blackberry were mentioned much more often than Android phones. Several different OSes were used for demos, along with different browsers. MS, Zoho, Amazon, and several other Google competitors were mentioned as viable alternatives, which definitely breaks the Marketing 101 rule: "If you're the market leader, never mention your competition." (UFS, why can't you act mature?)

That's not really what this blog post is about, though. I want to mention a couple of gems that were buried deep in the lectures and demos. Salesforce.com's new social layer (Chatter) is a blatant rip-off of Facebook. They admit it. They even revel in the fact. Why? Everyone in the new generation knows Facebook, and everyone understands it. Training costs to start using Chatter are almost zero. Turn it on, people immediately get it, and they immediately start using it. It doesn't matter that the interface for FB sucks or that a new kind of interface would be more efficient.
That brings me to the second, related point: don't change the interface. Add functionality on the back end, but leave the interface alone. The automobile analogy was almost required. Repeat: leave the interface alone. I fear the day GNOME 3 comes out, no matter how clever and "intuitive" it is. I much prefer the work around Elementary Nautilus and the integration of Zeitgeist and Tracker. In fact, my old notes for GNOME 3 were pretty much about total integration of tagging into the desktop and every application, while leaving the tags to appear as directories in the file manager.
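As a toy sketch of what that last idea could look like (purely illustrative; this is not GNOME, Tracker, or Zeitgeist code), tags live in one flat store and the file manager simply lists them as virtual directories:

```python
# Toy illustration of "tags appear as directories": a flat tag store rendered
# as virtual folders a file manager could list. Not GNOME/Tracker code.
from collections import defaultdict

class TagStore:
    def __init__(self):
        self._tags = defaultdict(set)      # tag -> set of file paths

    def tag(self, path, *tags):
        for t in tags:
            self._tags[t].add(path)

    def listdir(self, virtual_path):
        """'/tags' lists the tags themselves; '/tags/<tag>' lists tagged files."""
        parts = [p for p in virtual_path.strip("/").split("/") if p]
        if parts == ["tags"]:
            return sorted(self._tags)
        if len(parts) == 2 and parts[0] == "tags":
            return sorted(self._tags.get(parts[1], set()))
        return []

store = TagStore()
store.tag("/home/me/photos/party.jpg", "photos", "friends")
store.tag("/home/me/docs/gnome3-notes.txt", "gnome3", "notes")
print(store.listdir("/tags"))          # ['friends', 'gnome3', 'notes', 'photos']
print(store.listdir("/tags/photos"))   # ['/home/me/photos/party.jpg']
```

Every application writes to the same store, and the "directories" never have to exist on disk.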
Just something to think about. Cue comments about button placement in 3, 2, 1 ....
Related articles by Zemanta
- GNOME 2.30, End of the (2.x) Line (tech.slashdot.org)
- The Future Of Nautilus (omgubuntu.co.uk)
- Seif Lotfy: GNOME Activity Journal 0.3.4 Preview (seilo.geekyogre.com)
Labels: Elementary Nautilus, GNOME