Friday, May 28, 2010

Is StatusNet Really an Appropriate Base for a Social Network?

Why is there so much talk of ditching Facebook? Privacy issues, of course, but there's more to it. As much as Facebook does (and it does a lot), it fails to do what people need it to. Let's look at Matt Zimmerman's recent post, Optimizing my social network. I'll quote a bit from it.
Here is the arrangement I’ve ended up with:
If you just want to hear bits and pieces about what I’m up to, you can follow me on identi.ca, Twitter, or FriendFeed. My identi.ca and Twitter feeds have the same content, though I check @-replies on identi.ca more often.
If you’re interested in the topics I write about in more detail, you can subscribe to my blog.
If you want to follow what I’m reading online, you can subscribe to my Google Reader feed.
If (and only if) we’ve worked together (i.e. we have worked cooperatively on a project, team, problem, workshop, class, etc.), then I’d like to connect with you on LinkedIn. LinkedIn also syndicates my blog and Twitter.
If you know me “in real life” and want to share your Facebook content with me, you can connect with me on Facebook. I try to limit this to a manageable number of connections, and will periodically drop connections where the content is not of much interest to me so that my feed remains useful. Don’t take it personally (see the start of this post). Virtually everything I post on my Facebook account is just syndicated from other public sources above anyway. I no longer publish any personal content to Facebook due to their bizarre policies around this.
Mentioned are identi.ca, Twitter, FriendFeed, Google Reader, LinkedIn, a blog, and Facebook. Matt needs seven services to cover all his bases. Sure, many of the services are syndicated to other services, but he checks some more often (identi.ca) and some less (Twitter). What happens if I subscribe to the wrong service to follow Matt? Am I relegated to being a second-class social network citizen?

I believe that this situation frustrates the average person as much as or more than privacy issues do. A lot of people just don't care about privacy, are ignorant of what's being shared, are willing to make the trade-off because of a lack of alternatives, or just don't feel locked in. You can see the feature creep in Facebook, now many people's main e-mail client, as it tries to be all things to all people.

A week ago, Shashi wrote a wiki page called "Why build on StatusNet?" Evan Prodromou responded to the article with this dent: "GNUsocial, Diaspora, et al.: use StatusNet to build your distributed social network. It'd be dumb to start over from scratch." While I agree with the second half of the statement (starting from scratch would certainly be dumb), is it fair to ask a developer to build on StatusNet?

I'm no crack developer, but I'm going to attempt to answer this question by looking at as many Facebook features as possible, comparing them to current StatusNet features as implemented, and trying to gauge how difficult adding the missing pieces would be. I'm not going to pull punches. "Let's tie a bunch of unconnected services together and we're done" is not a realistic plan for replacing Facebook (and LinkedIn, and your blog, and ...) successfully. I'll be using the Wikipedia list of Facebook features as a starting point.

  1. Publisher: This is the core functionality of Facebook. You post something. It appears on your wall. It appears on your friends' walls in some cases, and so on. StatusNet has a similar setup, but its feed-based network obviously does things a little differently.
    Status(Net): Mostly implemented.
  2. News Feed: This is the first page you see when you log into Facebook. Users see updates and can "Like" or comment on these updates. Photos and video posts are viewable on the same page. StatusNet's "Personal" tab is similar, but is not the page seen on login (on identi.ca, at least). This is easily changed, but the tab lacks threaded comments and direct viewing of multimedia. There is a gallery plug-in, but threaded comments are much more difficult to do. Do the developers even want them?
    Status(Net): Partially Implemented.
  3. Wall: The wall is where all your FB updates go, and where people can respond. SN has your profile page, much the same, but there are, again, missing features like comments.
    Status(Net): Partially Implemented.
  4. Photo and video uploads: FB houses many people's online galleries. It handles photos and videos. They can be tagged. There are comments. All this stuff goes to your news feed. SN has nothing like this. The gallery plug-in has none of these abilities. This is a hard problem.
    Status(Net): Ground Zero.
  5. Notes: This is a blogging platform with tags and images. It's limited, but it's far beyond anything that SN has. The only option is to use a Drupal add-on to turn it into an SN hub. What's missing from Drupal? I don't know.
    Status(Net): Ground Zero.
  6. Gifts: I know. You're a geek. You hate gifts. You especially hate paying for gifts. Other people give me gifts all the time, though, so there must be some interest. SN doesn't have anything, but gifts seem easy to implement.
    Status(Net): Nothing, but not too difficult.
  7. Marketplace: Craigslist on FB? Sure, why not? SN is in the dark here.
    Status(Net): Ground Zero.
  8. Pokes: What's a poke? Who knows? Who cares! Still, SN would need them because I can guarantee the absence of them would become a big deal. Again, nothing, though not too hard, I'd guess.
    Status(Net): Nothing, but not too difficult.
  9. Status Updates: This is what SN is all about, but status updates are public. Could privacy happen on SN? I don't know.
    Status(Net): Mostly there.
  10. Events: FB is the event planner and the place to post the pictures after the event. That's really what it excels at. SN? Nowhere?
    Status(Net): Ground Zero.
  11. Networks, groups and like pages: SN has groups and is getting Twitter-like lists, but there are no networks as far as I can tell. I can "favorite" a post, but there is no way to create a page for other people to "favorite/like." Networks should be just designated groups. Likes need to be implemented in other places, and could be added pretty easily after that.
    Status(Net): Mostly there.
  12. Chat: FB chat sucks, but it exists. SN doesn't really have chat, though there are some IRC and XMPP plug-ins which can fake it. They're not private, though. Ouch. Think someone will get bitten by that one? Sure. Tack on an XMPP server to SN, and you're ready to go.
    Status(Net): Nothing, but not too difficult.
  13. Messaging: FB can be your e-mail client. It can even send mail outside the walled garden (last I checked). SN has a private messaging feature. With federation, this would operate similarly.
    Status(Net): Implemented.
  14. Usernames: FB lets you get a page with your name, unless your name is the same as someone famous's, that is. ;) SN has your profile at a nice, readable URL by default.
    Status(Net): Implemented.
  15. Platform: This is probably the biggest thing FB has brought to the table, and Farmville's numbers tell you that it's pretty important. Can SN follow? OpenSocial leads the way. Unfortunately, it doesn't have that viral thing going for it.
    Status(Net): Ground Zero.
So ... where does SN stand as a base for Diaspora to build on? Maybe 50%? I guess that's better than nothing. I've heard from a mailing list that the Diaspora guys are going to use OStatus. If that's true, then they might want to think about SN. I understand they have already built a base application, though.

What do you think? Have I missed any Facebook features that SN needs or already has? Did I get something wrong? Do you think that SN would make a decent social network, or is micro-blogging just the wrong model for it?

Saturday, May 15, 2010

Morevna: Open Source Anime Using Synfig, Blender, Gimp, and Krita

Most of the readers of this blog probably already know about Big Buck Bunny, Blender's open movie project codenamed Peach; some may even know about Sintel (Durian). Few, however, know about the Morevna project, an anime project dedicated to using only open-source tools in its production. From the site:
The story is based on the Russian fairy tale “Marya Morevna”. It is completely reworked to futuristic high-tech twist with a large amount of technobabble, expounded in a style specific to anime genre. 
Screenplay: Russian | English (draft translation)
Synfig is an authoring tool designed from the ground up to produce smooth animation without hand-drawing the frames between keyframes (a process called "tweening"), which significantly reduces the number of artists required to complete a major project. The artist defines the position of the objects in two keyframes, chooses a path for the movement, and assigns filters or deformations; the in-between frames are computer-generated. I understand that typical anime uses very few tween frames and limits motion on screen to limit the amount of work artists have to do. Synfig's method means a smoother-looking movie at thirty frames per second and the ability to add more animated movement.

The Morevna Project also uses Blender for many of the props, such as the helicopter and the motorcycle in the video below. I find the mix of 3D and 2D animation a little unnerving, but it is a common style these days which, again, reduces the amount of time spent drawing individual frames.


Diaspora Focusing on P2P, Shunning S2S

I sent an e-mail to the guys at OneSocialWeb asking them about their involvement with the Diaspora team.
Does OSW have plans to reach out to Diaspora to work on federation and specifications with them? I've noticed that StatusNet's OStatus fills a very similar role, as well. Could you involve them, as well? And then there's GNU Social ....
I'd really hate to see so much work going on with the same purpose (federated, Free social networking) but end up with incompatible servers and clients.
Anyway, thanks for all the hard and wonderful work.
Here was the response.
We got in touch with them (Diaspora) but at this stage it seems they are looking into a peer-to-peer approach using Gnu encryption (GPG). So, on a technical level, our projects are quite different. We will however keep in touch with them, like we are with the get6d guys, the GnuSocial mailing list, the other XMPP efforts, and the work at the W3C.
I personally think that competition is good. I don't see it as competition in fact, more as various groups experimenting with different ideas. There are so many ways to solve this: P2P vs client-server, HTTP only vs XMPP, Atom/AS vs RDF, OpenID&OAuth vs FOAF+SSL, End to end encryptions, DRM, etc etc ... I'm sure that out of these various attempts, some good ideas and concepts will emerge. It will be then the responsibility of the bigger players & standard committees to put some order in there.
At our level, we are actively engaged with the XMPP community and the W3C Social Web working group. So I'm confident that we can converge towards a good set of protocols in the near future.
Cheers and thank you for your ongoing support !
It appears that we now have very many competing projects. I hope one of them makes it. I'm much more optimistic about S2S for the average user than I am about P2P. Very few people leave their computers on all the time, a requirement for this type of communication if it's peer to peer. Server to server seems much more likely to enable social interaction when one of the parties isn't available.

Friday, May 14, 2010

F-Spot is Gone: Now Can We Get Rid of Tomboy?

F-Spot has been voted off the island by developers at UDS this week. The Mono application will be replaced by Shotwell, written in Vala. Since the only other Mono application I see in the default install is Tomboy, would it make sense to conspire to kick Tomboy Notes off the show next week and Mono off the CD the following episode?

This isn't a political demonization of Mono -- I'm actually quite surprised that F-Spot got the boot since so much work was done to it in order to have it be a stand-in for the Gimp. Still, it makes little sense to keep Tomboy and the Mono runtime when space could be freed on the CD by using Gnote, a C++ program that is basically a drop-in replacement for Tomboy.

Thursday, May 13, 2010

TerminateSafe, Save-free Applications, and oom_adj

There's a good proposal making the rounds on the mailing list which seeks to help the kernel decide which process(es) to kill first in the event of memory pressure. The proposal is simply adding a TerminateSafe=true key to the .desktop files of applications which are stateless or which constantly save user data. Examples of stateless applications might be the character map and calculator tools. Applications which save continuously might include Tomboy. Terminals, office suites, and IDEs would not have this key set to "true."

The Linux kernel's OOM (out of memory) killer has a method to determine which processes get killed first in a critical situation: it's called "magic." No, not really. It is complicated, though, and the kernel offers a "user-friendly" way to influence its decision. Each running process has an oom_score file under its /proc/<pid>/ directory; the score is based on factors such as memory use and running time. The same directory also contains oom_adj, which, as the name suggests, allows one to adjust the score of the process. Think of it like nice, but for killing, not running.

If TerminateSafe=true were implemented on Linux, it would likely signal the desktop environment to put the application at the top of the assassination list by adjusting oom_score through oom_adj. Other OSes might use a different method or not support the key at all.
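A concrete sketch of the /proc interface described above (the +15 value and the choice of the current shell as the example process are illustrative assumptions, and the privileged write is printed rather than executed):

```shell
#!/bin/sh
# Inspect the kernel's "badness" score for a process, then show the
# adjustment a session daemon could apply for a TerminateSafe=true app.
# oom_adj accepts values from -17 to +15; +15 means "kill me first."
pid=$$                              # use this shell as the example process

score=$(cat /proc/$pid/oom_score)   # heuristic score the kernel computed
echo "oom_score for pid $pid is $score"

# Writing oom_adj may require elevated privileges, so just print the
# command a privileged desktop service would run:
echo "echo 15 > /proc/$pid/oom_adj"
```

After the write, re-reading oom_score would show a much higher value, moving the process to the front of the OOM killer's queue.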

The problem with this approach is that the process the .desktop file spawns may not even be the main process if the file calls a script. The situation is made worse by processes that fork. Luckily, the kernel offers control groups (cgroups) to handle this situation, something systemd is planning to use.

I hope you've heard of systemd, the init/Upstart replacement with a model halfway between launchd and inetd. If you haven't, you should take a look at it.

Monday, May 10, 2010

First Impressions of the New Unity Netbook Interface

Mark Shuttleworth and Canonical unveiled the new Ayatana work that has been happening. Unity is a new interface for Ubuntu Netbook Edition which targets the instant-on market. Other instant-on players include Phoenix Hyperspace, ASUS Express Gate (Splashtop), and Xandros Presto. Linux basically owns the space with OEMs. These OSes are stripped down to almost nothing and typically don't access the filesystem at all, instead running stateless. Expect ChromeOS to compete strongly in this market when it's released later this year.

Ubuntu will be following the normal market trend of releasing custom-built images for OEM hardware, working with the manufacturers to get the boot time as low as possible. Shuttleworth claims that Unity will have a seven-second boot time on an SSD.

What does Unity look like on my netbook? Take a look.

Compare this to Hyperspace:

And to Express Gate / Splashtop:

Unity gives the genre a more functional look, departing from the others' spartanism, which I think will give customers a feeling of more control and greater capability.

As far as I can tell, it is not possible to modify the left launcher directly. I tried to add Chrome (my preferred browser), but couldn't do it until I had launched Chrome and its icon appeared in the launcher, allowing me to right-click and pin the application to the panel. You can use the top folder icon to gain access to installed applications not on the launcher panel. If that icon choice seems unintuitive to you, know that you are not alone in thinking that.

The interface is clean and easy to use. Within two minutes, I had discovered everything I needed to know to get to work. The panel icons have an arrow on the left side when they are running, and the active application is identified by an arrow on the right. If you have more than one window of a given application open, right-clicking will give you a scaled view of all the application's windows.

My one complaint about the interface is that it is quite easy to get notification windows which bleed off the bottom of the screen, but that shouldn't be a problem for most people using the interface as designed. Unity is already 90% there, and you can expect that Maverick (to be released 10/10/10 -- is he a numerologist?) will operate amazingly well with this interface.


Thursday, May 6, 2010

New Project: Live Help Links

While sitting in on Monday, I decided that adding links like that one to the Ubuntu docs would add some "Live Chat" feel to Ubuntu. I proposed the idea to the Docs Team, didn't get much in the way of negative feedback, and decided to go forward with the plan.

There are several things that need to be worked out:

  • Since throwing everyone into #ubuntu would be a bad idea, we probably need to use application-specific links
    • Are these for Ubuntu only (#ubuntu-application) or do we use the official channel?
    • Do we include support for all applications (a huge undertaking) or just default ones (easily manageable)?
  • How do we make this prominent and easy to understand?
  • Since not all applications have official support channels on Freenode, how do we handle the others?
    • Create new channels on Freenode. This answer is especially useful if we went with #ubuntu-application channels and less appropriate if we want official channels.
    • Use a third-party web chat service. We would be depending on a third party, which is not something I want to do long term.
    • Launch Empathy. Empathy's IRC support is terrible, and it currently doesn't support passing irc:// URIs.
  • How do we handle internationalization? My thought is to leave that up to the language teams. They can decide what channels to use based on whether they exist and/or are populated.
Quite obviously, I'm not the first person to think of this: there are two projects on Launchpad that are similar (but dead). They depended on GAIM, though (and that should tell you their age), while I am proposing a method which works without an installed IRC client and which works across all flavors of Ubuntu. There have been many moves to get XChat into the default install to push more live help.


Wednesday, May 5, 2010

Too many places to click!

What's the problem with tabs (or any other MDI)? I suddenly have two places to click to choose which application to use. We've tried fixing this problem for years. At first, when we used mostly SDIs, computers weren't powerful enough to make this a problem. When it did become a problem, we tried grouping windows together in the same taskbar entry. People hated it. They lost windows all the time. Later, we "solved" this problem by using tabbed interfaces to simplify the taskbar, but we've really only moved the problem.

Like a lot of people, I run 60-70% of my apps in the browser. Maybe more on some days. A lot of these applications are my first choices. It screws with me. Let's say I'm listening to music and I want to change my playlist. Do I go to the taskbar (if I'm listening in Rhythmbox), go to the notification area (if RB is hidden there), or go to one of my browser tabs, possibly in another browser window (if I'm using Pandora or the like)? I can't train myself because the situation is always different.

I have taskbar buttons and tabs, then I have more tabs inside my tabs for apps like Zoho, and I have the system menu, the application menu, and quite possibly a third menu inside my browser. I can't even remember whether the web page I'm looking at is in the web browser, or whether it's in Miro or Rhythmbox. Arrrrgghh!

Are a global menu and a tabbed window manager part of the answer? I don't know. What do you think?

Tuesday, May 4, 2010

Ubuntu 10.04 LTS, Amazon S3, and OVF

I want to put up a video about OVF and virtual server portability which I think is worth watching, owing to Canonical's intent to move more into cloud services for its servers.

Ubuntu Open Week's Social From the Start Session -- Ubuntu Updates and the MeMenu

I posted a suggestion to jcastro which he seemed to like: setting up Gwibber during OS installation to follow the Ubuntu and Ubuntu One announcement accounts, so that users can be aware of major bugs in updates (and hold off, or find the fix) and know about Ubuntu One outages.

I understand that Ubuntu (or Canonical) runs its own StatusNet server, so accounts could be given to new users during the installation if they didn't already have one. More integration could be found between help and application resources using status update messages.

There were many proposed services: Google Buzz has planned support, PicasaWeb was suggested, and there was even official talk of linking F-Spot to Gwibber's new API to allow viewing friends' photos inside F-Spot.

Qense also mentioned using Software Center to share thoughts about a certain application. I suppose it could be used to ask questions, as well.

There is talk of getting the Gwibber service to run headless on an Ubuntu server installation so that LoCo teams can use their new StatusNet subdomains automatically.

Ubuntu Open Week's Ubuntu One Session -- Music Overages Handled, and the Future of Synced Preferences

Maverick might end up eschewing GConf for DesktopCouch in order to have preference syncing. Is this one more sign of Ubuntu drifting from GNOME?
<rodrigo_> daengbo, I am planning on writing a gsettings (gconf replacement) backend that stores config settings in desktopcouch
[11:40] <qense> Please not that GConf is planned to be deprecated in the future in favour of GSettings/DConf.
[11:40] <qense> note*
[11:40] <rodrigo_> daengbo, so yeah, once that is available, you could have all your settings on desktopcouch

Also, the music overage problem has been handled by simply allowing it.
daengbo asked: How are music purchases handled when your 2GB quota has been reached?
[11:53] <+aquarius> Music purchases can still be made if you've reached your quota

Monday, May 3, 2010

Balsamiq Mockup Tool for Linux

Looking at Mark Shuttleworth's mockups for Maverick Meerkat made me wonder about the tool he used. Luckily, the tool was mentioned in the images -- Balsamiq.

Balsamiq is a nice tool. You can try out a Flash version for yourself. There's a pop-up after five minutes, but you aren't prevented from continuing the trial, which is nice. While the application is designed to help teams work together, and the website touts "collaboration," Balsamiq doesn't appear to have real-time collaboration.

Balsamiq is $79, which is almost nothing if your business is design. The Linux version is available for demo download here.

Ubuntu Netbook Remix's New Look for 10.10 (Maverick Meerkat)

Mark Shuttleworth had talked about putting the application menu in the top panel for Ubuntu 10.10 UNR, and now the first mock-ups have arrived. In addition to that change, the status panel is gone and status messages are transient overlays a la the Chrome browser. Application-specific notification icons ("windicators" ... ahhh) appear in the titlebar, which is now drawn by the application.

I think this is a lot of work. Ubuntu and Canonical haven't been big on diverging heavily from GNOME or Debian. Since GNOME is moving to Shell and the Mutter window manager, this appears to mean that Ubuntu will be forking. Do the Canonical devs have the dedication to make this happen? How buggy will the first version of this forked window manager be? Will the windicators (I grimace even writing it) provide true benefit?
That was the negative. Now for the positive. If Ubuntu sticks with panels and doesn't move to the GNOME Shell, I think it will be a good decision. Scrapping everyone's understanding of an interface and completely starting again will only hurt Ubuntu adoption where Canonical wants to make money -- the enterprise. As I've said in previous articles, "Don't change the UI." These indicator additions seem like smart and intuitive additions to the WIMP desktop model.

If Ubuntu diverges from GNOME (and probably Debian, since it rarely customizes upstream projects more than necessary) by ditching Mutter and the Shell, does it have the chops to keep up? For Canonical, that seems to be the $20,000 question.

See the full story from Mr. Shuttleworth here: Mark Shuttleworth: Window indicators

Unscrewing Your Failed Ubuntu 10.04 Upgrade (a.k.a. "It's all my fault, honey!")

This isn't a blog post aimed at the general public -- it's for my gal who's twelve timezones away. I'm putting it on my blog because there may be other people in the same situation who could follow the directions. My older post complaining about the HP P1006 situation got a lot of hits, for example. Here goes.


Hi, honey! Sorry I screwed up your machine. It's all my fault. I should have known that the upgrade wouldn't work after all the customization I'd put on the old system. You totally get Linux, though, so I know that you can follow these instructions to get your beautiful system back. The bad news is that I'm going to use a couple of hours of your time to unscrew what I screwed up. I know you'll still love me, anyway.
What are you going to do?

Let's talk about the steps before we start, OK?

  1. Back up your data (just in case)
  2. Get rid of your strange application preferences
  3. Download Ubuntu 10.04 and write it to a blank CD or USB key.
  4. Install Ubuntu 10.04 without writing over your personal files (be really careful here!)
  5. Install Ubuntu Tweak to help you with getting all your old applications back
  6. Add the extra applications you need
That should be it. Are you ready to start?

Back up your data (just in case)

I have lots of disk drives in the old media server that are as big as your desktop's hard disk. You can use any of those by opening the case and pulling one out. The ones with the red cables are better and newer than the ones with the wide, gray cables. Take the cable out with the drive, and look to see how it's connected. Be careful, though: don't bend any pins. Turn off your computer and open the case, then plug in the disk drive the way you saw it before. Close the case.

When you turn your computer on, Ubuntu will automatically recognize the drive, but it's not formatted correctly, so you'll have to do that. Go to Places > Computer, and double-click to check that it's the right one. It shouldn't be readable and Ubuntu should give you an error message. Right click on the drive and choose "Format." If that doesn't work for some reason, you can use System > Administration > Disk Utility to do it: choose "Format drive" in the top part and "Format volume" in the bottom part.

When you're finished, the drive should show up in Places > Computer and you can open it, then copy all your files to it. It may take hours to copy. You have a lot of stuff. When that's finished, click the little eject symbol next to the drive in the file manager, shut down, and disconnect the drive. You don't have to pull out the drive.
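For anyone comfortable in a terminal, the copy step can also be done with one command. A minimal sketch: in real use the source would be your home directory and the destination the new drive's mount point (both paths below are placeholders; the demo runs on throwaway directories so it is safe to try anywhere):

```shell
#!/bin/sh
# Archive-copy a directory tree, preserving timestamps and permissions.
# In real use SRC would be "$HOME" and DEST the backup drive's mount
# point (e.g. /media/backup -- a placeholder name).
SRC=$(mktemp -d)                         # stands in for your home directory
DEST=$(mktemp -d)                        # stands in for the backup drive
echo "my important notes" > "$SRC/notes.txt"

cp -a "$SRC/." "$DEST/"                  # -a = recursive, keep metadata

ls "$DEST"                               # notes.txt should now be here
```

The trailing `/.` on the source makes cp copy the directory's contents, including hidden dotfiles, rather than nesting the directory itself inside the destination.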

Get rid of strange application preferences.

I don't want the newer desktop application versions to get confused by old configurations, so you need to delete them. Since you're an old DOS gal, I know you love the terminal. Open one up and type the following exactly:
gconftool-2 --recursive-unset /
The first, long dash is actually two dashes.

Download Ubuntu 10.04 and write it to a blank CD or USB key

Go to the Ubuntu website and click the "Download Ubuntu" button. On the new page, under "Begin Download," click on "Alternative Download Options." The "Other Download Options" column has a "BitTorrent Download" link. Click that. On the next page, choose ubuntu-10.04-desktop-amd64.iso.torrent. I put the direct link here, too, for you. Your BitTorrent client should download the CD in an hour or so.

You can either write the image to a CD or use a USB key. I think the USB key is easier. You have the 2GB one, right? Put that in the computer and wait for it to register. Go to System > Administration > Startup Disk Creator. Choose the CD image and your USB drive (it'll probably be /dev/sdb1). You might need to erase the drive. Next, click "Make Startup Disk" and wait until the program finishes. While it's doing its work, you can shut down all your other programs. When the program is finished, reboot.

If the USB key doesn't work or you don't have one, you can write the CD iso file onto a CD. Put a blank one in the drive (I have blank ones in with the rest of the computer stuff), cancel anything the computer wants to do, find the CD image, right-click on the file, and choose "Write to CD." When it's finished, simply reboot.
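If both graphical options give you trouble, the image can also be written from a terminal with dd. A sketch, demonstrated on scratch files because writing to the wrong device destroys its contents; in real use `if=` would be the downloaded .iso and `of=` the whole USB device node (something like /dev/sdb, not /dev/sdb1 -- the device name is an assumption, so check it with `sudo fdisk -l` first):

```shell
#!/bin/sh
# dd copies an image byte-for-byte onto a target.  Real usage would be:
#   sudo dd if=ubuntu-10.04-desktop-amd64.iso of=/dev/sdX bs=4M
# Shown here against temporary files so nothing can be damaged.
IMG=$(mktemp)                            # stands in for the .iso
TARGET=$(mktemp)                         # stands in for the USB device
printf 'pretend ISO9660 contents' > "$IMG"

dd if="$IMG" of="$TARGET" bs=4M 2>/dev/null
sync                                     # flush writes before unplugging
```

The `sync` at the end matters on a real device: pulling the key before the kernel flushes its write cache leaves a corrupt image.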

During the boot screen, you need to press a button to enter the boot menu. The screen should tell you what button that is. It's normally F8 or F12. Choose your USB key in the menu and hit Enter.

Install Ubuntu 10.04 without writing over your personal files (be really careful here!)

At Ubuntu's boot menu, you can choose either English or Thai, and choose to "try Ubuntu." You should boot into a full desktop. Double-click the install icon and start the installation process. Everything is pretty straightforward. I don't think you'll need any help, but if you do, here are the installation instructions.

Don't erase your drive! On the "Prepare disk space" page, choose "Manual," then choose your hard disk. This is where the directions get kind of complicated. There are two possible setups for your hard drive. I don't remember which one was used.

  1. The first and most probable setup is that I used a separate partition on the disk for your /home. I like to do that. When you look at the disk partitions in the Ubuntu installer, there will be one (/dev/sda1) that is around 20-30GB and another (probably /dev/sda2) that is 200+GB. If that is your situation, click on the 20-30GB partition, select to edit it, choose to format the partition, and choose "/" as the mount point. Click "OK." Next, choose the 200+GB partition, click to edit, make sure that "Format" is unchecked, and choose "/home" as the mount point. Click "OK." When you are finished, click "Forward."
  2. The second, less likely possibility is that there is one big partition of around 230GB. In that case, choose the partition, click to edit, make sure that "Format" is unchecked, and choose "/" as the mount point. Click "OK." The installer will warn you about deleting some folders. Don't worry.
In either case, finish the other installation steps, making sure that you use the same username as before. Reboot into the new system when you are finished.

Install Ubuntu Tweak to help you with getting all your old applications back

In your new system (with your old files), open Firefox. Go to the Ubuntu Tweak website and click "Download Now!" Follow the instructions to install the application. Run the application from Applications > System Tools > Ubuntu Tweak. In the left panel of the application, there will be an entry called "Source Center." Click on it, then click the "Unlock" button. Put in your password. Don't worry about the warnings about third-party software in this case.

Choose the following by clicking on the check box.

  • Adobe Flash PPA
  • Chromium Browser Daily Builds
  • Docky, Elementary Desktop PPA
  • GNOME Global Menu PPA
  • Medibuntu
  • Nautilus Elementary PPA
  • Skype
  • Ubuntu Tweak Stable
  • Ubuntu Wine Team PPA
Click "Refresh," then after the computer downloads the package lists, click "Select All" to install all of the new packages.
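If you'd rather not use Ubuntu Tweak for this step, the same repositories can be added by hand with add-apt-repository. A rough sketch (the PPA names here are illustrative; confirm the exact ones on Launchpad before adding them):

```shell
# Add a couple of the PPAs manually, then install packages from them.
sudo add-apt-repository ppa:chromium-daily/ppa
sudo add-apt-repository ppa:docky-core/stable
sudo apt-get update
sudo apt-get install chromium-browser docky
```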

Sometime during this, Ubuntu should have told you that new drivers were available and put a little computer-card icon up next to the clock. Click on it and install the recommended NVIDIA driver. Now, you'll need to reboot.

When you restart, you can

  • Start Docky (Applications > Accessories > Docky),
  • Remove the bottom panel,
  • Remove the top menu (right click on the menu and click "Remove from panel"),
  • Add the menu button to the top panel (right click on the panel and choose "Add to panel," then choose "Main menu."),
  • Add the Global Menu to the top panel using a similar method as above.
After that, I think you're done. That wasn't so bad, was it? OK, you can kill me in June when you see me next.

Building43 Interviews Evan Prodromou, Founder of StatusNet

This video covers what StatusNet is, its similarities to and differences from Twitter, and a bit about its future. There are also great discussions about the advisability of enterprises putting their marketing in Facebook's and Twitter's hands and the growing use of standards like PubSubHubbub and WebFinger in other services like Google Buzz. The video is almost twenty minutes long, but it's quite interesting throughout and suitable for both the microblogging veteran and the absolute beginner.

StatusNet is an open-source, federated microblogging platform which runs on about 20,000 sites, the most famous of which is The service is generally API-compatible with Twitter, and StatusNet servers can communicate with each other out of the box, as well as with many other social sites like Tumblr. Building43 is a RackSpace-sponsored site which aims to "bring together thought leaders in a variety of disciplines and organizations, from entrepreneurs to those responsible for the latest technologies. They will share knowledge, experiences and advice on how you can use these cool new tools and apps."
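Because StatusNet is broadly API-compatible with Twitter, you can talk to a StatusNet server with nothing but curl. A minimal sketch (example.status.net and the credentials are placeholders, not a real server or account):

```shell
# Post a notice through the Twitter-style REST API using basic auth.
curl -u alice:secret \
  -d 'status=Hello from the command line!' \
  https://example.status.net/api/statuses/update.xml

# The public timeline can be read without authenticating.
curl https://example.status.net/api/statuses/public_timeline.rss
```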

Sunday, May 2, 2010

Lead OGG Dev Responds to Jobs' Jabs

Xiph.Org's Gregory Maxwell, a developer of the OGG container and the Vorbis audio and Theora video codecs, had a few choice words for Steve Jobs over his recent suggestion that OGG Theora would soon be in court over patent infringement. What did Steve say, exactly?
 "All video codecs are covered by patents. A patent pool is being assembled to go after Theora and other “open source” codecs now. Unfortunately, just because something is open source, it doesn’t mean or guarantee that it doesn’t infringe on others patents. An open standard is different from being royalty free or open source. 
Sent from my iPad"
Sigh. You've got to love Apple's self-promotion. Gregory had this response on the Theora mailing list:
It would seem both surprising and remarkably underhanded, even considering the probable involved parties, to undertake constructing a patent pool for some product without ever consulting the vendor of that product: Surely no good faith effort to construct a valid and usable patent pool for a codec could be undertaken without contacting the developers of the codec.
In particular— according to the US Department of Justice "A licensing scheme premised on invalid or expired intellectual property rights will not withstand antitrust scrutiny." So, even though it is apparent that the pool or its participants would have no interest in receiving royalties from such a pool, a failure to contact the developers in an effort to determine the validity of any potential patent claim would be unconscionable.
Since the developers of Theora have received no such contact, I can only conclude that no such effort is being undertaken and that the quoted statement is either a forgery, the result of a misunderstanding, or that the statement may be indicative of a dishonest and anti-competitive collusion by Apple and other H.264 patent holders to interfere with the development, promotion, and utilization of unencumbered media standards.
If you've read Gregory's dissertation response to accusations that OGG is a technically inferior container format, you'll know that he is both eloquent and confident. He goes on to say in another e-mail from the list:
The specific standards process used to develop the MPEG codecs creates patent minefields that royalty-free codecs don't generally face. Because many knowledgeable people have heard of the problems faced by these patent-soup standards, they may extrapolate these risks to codecs developed under a different process where these problems are less considerable. This is a mistake, and I'll explain why here.
Recently there have been a number of prominent statements along the lines of "all video codecs are covered by patents" and "virtually all codecs are based on patented technology". 
These statements are carefully engineered FUD spread by the license holders of competing formats in order to discourage the use of unencumbered alternatives. They are careful to avoid naming WHO owns these supposed patents or WHAT is actually patented, because such specific statements would allow the victims of this FUD to petition a court for a declaratory judgment of non-infringement.
This FUD is particularly effective because there _is_ a widespread misconception that media codecs are a patent minefield to a greater extent than other areas of software. 
Certainly this is the case for the MPEG codecs, but it is not a universal truth. To understand why, you must understand a little about the process used to build these international standards. 
The reason the MPEG formats are so thoroughly encumbered by patents is that the process used to build the formats is designed to be "blind" to patent considerations: all the participants have agreed that any patents they hold will be licensed under "Reasonable And Non-Discriminatory" terms, a term of art which few normal people would actually describe as all that reasonable or all that non-discriminatory, as RAND often means "quite expensive". With only that assurance in hand, they go about constructing their formats through an extensively political tournament process where proposals are made and encouraged to be combined. 
So no effort is made to avoid patents, but it gets worse: 
If you're a participant in this process, it is very important that some of your patented technology make it into the result: if it doesn't you'll end up having to pay the same royalties as the rest of the world, but if it does you can cross-license your patents with the other "winners" and completely avoid paying to use the resulting format. 
So even if you're not looking to make a profit from your participation, you'll be sure to get some patents into the result so that you don't have to _pay_ for the result of your own labors. 
As a result these formats end up rife with inconsequential or even detrimental patented techniques which could have _easily_ been avoided, as essential elements. 
—— and this is the outcome when all of the parties are playing by the rules. For an in-depth analysis of the mess that patents are making of standardization, see: 
It doesn't have to be this way. Most media coding patents are exceptionally narrow, as it's much cheaper and easier to obtain a very narrow patent. The fact that a patent can be trivially avoided— often by something as simple as changing the order of a process— isn't a problem for patents designed to read on standards, since the standard mandates doing it "just so". 
By starting out with the premise that you want things to be royalty-free and not merely RAND, you remove the incentive structure that encourages the creation of minefields. By being cognizant of the risk and sticking close to the known safe prior art, rather than the willful patent entanglement of the MPEG process, the risk of surprise claims by third parties is also reduced. 
The problem of patents isn't eliminated— they are still a costly burden on the developer of any standard, but the environment surrounding the MPEG patents is simply not a good indication of the real difficulty. 
The process used by MPEG is ultimately counterproductive. By being "blind", what they are actually doing is encouraging a kind of patent cold war. At the end, even the inventors and fully paid-up licensees of those formats end up in court—fallout from playing with these dangerous toys. This can only be avoided by rejecting the taint of encumbered technology, and accepting the challenges and compromises that come from doing so. Or, in other words, the only way to win is not to play.
Well, we certainly know where he stands on this matter. I hope he is correct, but I've got little faith in the patent system and even less in East Texas, where this fight will almost certainly be fought.

Friday, April 30, 2010

Tinycore Linux and "On Demand" Computing

I've moved recently, and the only computer I've got working right now is my old Intel Classmate notebook. It's slow, but it works well for 95% of what I want to do, even if the keyboard is a little small for my giant claws. Anyway, I was running Ubuntu 10.04 Beta when the Xorg memory leak bug hit, and I used that as an excuse to try some stuff I'd been thinking about for a while.

I installed and tried Fedora 13 Beta for about a week. I got really hands-on with it, and I have some pros and cons that I'll (hopefully) cover this weekend. I also tried Tinycore Linux, which some of you may never have heard of.

Tinycore is ... tiny: it's 10MB, which puts it right at the bottom of the "small Linux" distros. It's also very core. There are no apps. It boots to a minimal desktop (a window manager built for Tinycore) with a small dock (Wbar), and nothing else. Oh, there's a terminal, a control panel, and an app installer (using FLTK). It feels very much more "then" than "now." Believe me, though, it boots fast. From my SD card, the desktop is fully functional in 3 seconds -- my SD card is slow.

By default, Tinycore boots into "cloud" mode, which is like a live CD, but it runs completely from memory. With only 10MB, you can understand that running in memory isn't a problem. You can also guess how blazingly fast it is. When you want to run an app, you open up the application installer, search and choose (many are available), and click "Install." The application(s) appear in your dock.

Everything continues to run completely in memory. Installing means downloading a TCZ package, which is really just an archive of the binary, along with a hash file and a dependency file. Dependencies are handled automatically. When the package is installed, the original files are deleted to make room in RAM. Starting an application is almost instantaneous, even for a big app like Firefox. Since the package format is so simple, even the newest software (like PCManFM2) is available. The simple package format also means that the application installation takes almost no time, even on my little netbook. Chrome Browser installs in about 1.5 seconds, for example.
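The sidecar files are easy to picture. A package named firefox.tcz ships alongside firefox.tcz.md5.txt (its checksum) and firefox.tcz.dep (its dependencies, one per line). A toy sketch of how the checksum side works (the files created here are stand-ins, not a real package):

```shell
# Create a stand-in "package" and its md5 sidecar, the way the repo does.
echo "pretend this is a compressed filesystem image" > firefox.tcz
md5sum firefox.tcz > firefox.tcz.md5.txt
printf 'gtk2.tcz\nlibnotify.tcz\n' > firefox.tcz.dep   # illustrative deps

# Verifying a download is just a checksum comparison.
md5sum -c firefox.tcz.md5.txt    # prints "firefox.tcz: OK" if intact
```

Since there's no scriptlet or database step, "installing" really is just fetching, checking, and mounting, which is why it finishes in a second or two.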

You can also set Tinycore to run in another mode, called TCE. If you specify TCE mode on boot and give a location to save, Tinycore will save all your packages to that location so that you won't have to download them again. You can also set applications to be loaded automatically "on boot" or only when first launched, "on launch." Applications aren't permanently stored in the filesystem unless you go to real trouble to do it. They are always freshly installed, either at boot or at launch.

This completely original distribution takes a new approach to computing: on demand. Imagine if my computer just PXE booted to Tinycore -- how fast would it be? With GbE Internet connections coming, and 10GbE after that, how much sense does it make to store my OS on my desktop? Network speeds eclipse most hard disk speeds, even now.

I can see putting up a server in my house for PXE booting a custom image of something like Tinycore, with apps set "on launch" on an NFS directory from the server. This is starting to sound a bit like LTSP (which is a great project I've deployed a couple of times), but everything here is local and running completely from memory. Applications will launch faster than their HD-bound cousins since the network is quicker than my HD.
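A home PXE setup like that needs little more than dnsmasq serving DHCP and TFTP. A configuration sketch (the address range, paths, and kernel/initrd file names are assumptions; match them to your network and Tinycore release):

```
# /etc/dnsmasq.conf (fragment)
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp

# /srv/tftp/pxelinux.cfg/default
DEFAULT tinycore
LABEL tinycore
  KERNEL bzImage
  APPEND initrd=tinycore.gz
```

With something like that in place, any machine on the LAN set to network boot would come up in Tinycore's RAM-only desktop.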

Why do I say that? Let's look at what happened to me two nights ago. I installed Ubuntu 10.04 over the network, using just a kernel and initial ramdisk as a starting point. It's my favorite way to install when I have a decent connection. (I'll write about the experience soon.) I downloaded the base Ubuntu system (2 minutes) and installed it (10 minutes). Next, I downloaded the full desktop (6 minutes) and installed it (2.5 hours!). This is on a 12Mb/s network with a 30GB netbook HD. Imagine the speed on 100Mb or even Gb Ethernet.

Where does this all end? SaaS, folks. It's coming. On demand computing will be here soon.

Saturday, April 24, 2010

My Thoughts on "Then and Now"

There's a meme appearing on Gnome Planet and Ubuntu Planet where people post their first Linux desktop and compare it to what they run now. I thought I'd put my two cents in. First, my first!

Red Hat 5.1, courtesy

It looks primitive, I agree, but let's compare it to what I was using before.
Windows 95, courtesy of

You know what? They weren't that different. Win95 had Plug'n'Play, but it worked so badly that it shouldn't have been a feature of the OS. Memory was unprotected, so the OS would hang for no reason whatsoever. Moving to RH5 seemed like a joy, once you got it set up. That was the painful part. Oh, and it supported like three pieces of consumer hardware, so I had to go out to buy a new, "real," modem. Netscape sucked, but so did IE.

Let's move forward to today, on my netbook:

There's a better theme, and the controls all moved (but that happened years ago with the move to GNOME 2). Still, there's not a lot different in the UI. It makes my point from last week -- don't change the UI. The core libraries, though, are completely different. Applications can easily communicate, and there are standard libraries for things like communication, media, and document rendering. None of that was true of either RH5 or Win95.

Where's the competition?
Windows 7, from

Similar level of difference from Win95 to Win7 as from RH5 to Fedora 13, eh? (I thought about trying to get RHEL 5.5 and taking a screenshot just so I could say "From RH5 to RH5.5," but I didn't.) Again, there is some flash, and the internals have all been replaced (by NT!), but the basic UI isn't significantly different.

I thought we'd be further along by now. Where's my flying car? I guess I'll look to the smartphone market to see the changes I really want.

An Open Letter to the Mozilla and Chrome Developers

At the close of F8 and knowing Facebook's plans for world Internet domination, I want to request that the developers of the two best browsers, which hold 30% of the market between them (only going up from there), help their users out and provide another option. Please implement XMPP in the browser and support the social networking aspect of OneSocialWeb. "Why should we do that," you ask?
  • I'm tired of creating new accounts at every website on Earth. OpenID offers a way around this using OAuth, but I need to choose and type in my OpenID provider. My XMPP provider can be used as my OpenID provider (as OneSocialWeb does) and the browser will know my identity, making it easy to connect. The website I connect to will simply become my "friend." Of course, Mozilla and Chrome need to implement private browsing and profiles so that I can have several identities or even remain anonymous if I so choose.
  • Once a website is in my XMPP contact list, I can give the site atomic permission to view only the parts of my profile and activities I choose, whether the limits be by network, group, individual, or other criteria. The access to this information is securely based on XMPP's permission system, which is robust. Much like Facebook's new permission system, websites can use this limited information to customize the site experience for me and give me more information about others I know who are also on the site. I could even delegate authority, for example,  to edit a Flickr photo album of a party to one or two of my friends who were also at the party.
  • The site could talk to me about things that I would find interesting. It could update me both in real time and on my OneSocialWeb news page about changes, what my friends liked, or whatever. Best of all, I could choose to deny the website access to my feed and simply ignore anything from the site.
  • I could see my friends' status updates, including ones that were private for me only, for groups, or for the public, and wouldn't need to rely on Twitter or (StatusNet) for that any more. Not to be a doom-sayer, but I don't think Twitter has enough of a business model that we should be programming it into our infrastructure. It's closed, to boot.
  • I could do this kind of stuff without being tied to Facebook ... or GMail ... or Twitter ... or any particular provider. I or my company could even host a server. XMPP is federated, you see, and doesn't tie anyone down. OneSocialWeb doesn't say where your data needs to be stored (or kidnapped). I would have control of my data. No one would own it or me.
  • I could ditch the multiple IM logins I now have, but which I rarely use because of the pain they cause me. With Mozilla and Chrome users on board, it would be easy to communicate with any of my friends. I wouldn't need to create yet another account on yet another IM server, download a client (or configure a multi-client), and start talking. For my friends who don't have XMPP accounts, I could just recommend changing browsers (yours are better, anyway). Best of all, there would be no walled gardens. I could invite two friends who didn't know each other -- and who were previously on different IM networks -- to join a group chat with me, introducing them to each other. How novel is that?
  • XMPP supports VOIP and video chat. My friends would no longer need Skype, yet another service I needed to sign up for and maintain.
  • Mozilla and Google could use OneSocialWeb to increase their brand by offering logins to their own servers by default for new XMPP users in addition to allowing a simple user@server + password login for current XMPP users.
  • Google and Mozilla are trying to do this stuff anyway, but at cross purposes. Google Buzz appears to be stillborn, and Google is having trouble getting privacy permissions correct. Mozilla Contacts is working on connecting social networks people already use, much like Pidgin connects IM networks (i.e., not really). Why not use open standards, technology, and source code? Use OpenID, OAuth, and XMPP.
  • Finally, it exists now. Servers and clients are available. Take the reference code from OneSocialWeb and adapt it to your browsers. It's less work than doing it from scratch.
Of course, I support Opera, Safari, and Internet Explorer adding social network support through XMPP, too. Since the reference code is Apache licensed, it's available for closed source projects, too. I just think that you, Chrome and Mozilla devs, are more likely to listen to my pleas.

Sunday, April 18, 2010

Change the back end, not the UI

I watched four hours of the Google Atmosphere event yesterday. Sure, a lot of it was Google preening and PR, but there were a lot of surprises. The iPhone and Blackberry were mentioned much more often than Android phones. Several different OSes were used for demos, along with different browsers. MS, Zoho, Amazon, and several other Google competitors were mentioned as viable alternatives, which definitely breaks the Marketing 101 rule: "If you're the market leader, never mention your competition." (UFS, why can't you act mature?)

That's not really what this blog post is about, though. I want to mention a couple of gems that were buried deep in lectures and demos. Salesforce's new social layer (Chatter) is a blatant rip-off of Facebook. They admit it. They even revel in the fact. Why? Everyone in the new generation knows Facebook, and everyone understands it. Training costs to start using Chatter are almost zero. Turn it on, people immediately get it, and they immediately start using it. It doesn't matter that the interface for FB sucks or that a new kind of interface would be more efficient.

That brings me to the second, related point: don't change the interface. Add functionality on the back end, but leave the interface alone. The automobile analogy was almost required. Repeat: leave the interface alone. I fear the day GNOME 3 comes out, no matter how clever and "intuitive" it is. I much prefer the work around Elementary Nautilus and integration of Zeitgeist and Tracker. In fact, my old notes for GNOME 3 were pretty much total integration of tagging into the desktop and every application, while leaving the tags to appear as directories in the file manager.

Just something to think about. Cue comments about button placement in 3, 2, 1 ....
