Two interesting things happened this week.
First, the "R" programming language got a mention in the mainstream press.
I also discovered that the OpenBSD folks are working on using a non-gcc C compiler (pcc). It turns out that in the non-Linux Unix world there is not much love for gcc.
This makes some sense. gcc is huge and hard to work with. When it compiles, it uses every available resource and eats the machine. It's not that easy to port or maintain. And it keeps getting bigger.
"pcc" on the other hand is 5-10 times faster, generates reasonable code and is easy to port and work on. Someone is actually maintaining it. This is the same pcc some will remember from Bell Labs in the '70s, back when all the world was a pdp-11 (just before all the world became a vax).
And, as ecosystems go, it's good to have more than one option. Linux is completely dependent on gcc. NetBSD, on the other hand, is not.
(I've been working on getting NetBSD to run on my VAX-11/730 again, so I've fallen back in love with NetBSD. Well, I think we're more like friends with benefits, but please don't tell Linux I've been cheating.)
Anyway, I thought that was interesting. Apparently the pcc maintainer is planning to add PIC support to pcc this winter, which is one of the missing features it needs.
I have a couple of machines running Ubuntu. More and more lately.
One machine at home was running Ubuntu 7.04 and MythTV. I was loath to change it because it was working, and I hate having to type "ssh" when I'm watching TV.
But I finally did it over the holidays. First I upgraded to 7.10, which was a pain. 7.04 is no longer supported, so I had to hack the apt config file to point to the old-release archives. But this broke the upgrade. So I ended up starting with an apt config pointing at the archives and then switching it in the middle of the upgrade to point to the normal repository. A bit hair-raising, but it worked.
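For the record, the trick was pointing /etc/apt/sources.list at the archive for unsupported releases. From memory, the lines looked something like this (the exact host and component list are from my recollection, so double-check before trusting them):

```
# /etc/apt/sources.list for an unsupported release (7.04 "feisty")
deb http://old-releases.ubuntu.com/ubuntu/ feisty main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ feisty-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ feisty-security main restricted universe multiverse
```

Once the upgrade tool starts pulling the newer release, these need to be switched back to the normal archive.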
Once at 7.10 I could cleanly upgrade to 8.04 and then to 8.10. I did it all via ssh, and it worked with very few problems. My hat is off to the Ubuntu folks.
Getting X to work consistently with my Nvidia display card was not so easy. With some releases there is support from Ubuntu. But not 8.10. For that I had to go back to using the Linux install script from Nvidia, which does all sorts of fun things under the covers.
But now I have MythTV back, and I'm running 8.10, and all is well with the world.
That went so well I upgraded a machine at work, and it too did the right thing. Very nice.
Sadly, I'm about to wave goodbye to Red Hat. I wish I didn't have to, but they don't seem to be providing the same level of coolness that Ubuntu is. The "apt-get" system is just too nice. I never want to see another rpm again.
Virtually everything I wanted was available from apt-get. Amazing. And easy!
I love my TiVo. I've added a big disk to it. But I want to see more of my "personal media" on the TV (pictures, movies, etc.), so I decided to make a MythTV box.
I built a nice new PC in a "stereo cabinet"-style box. I used a motherboard with built-in HDMI output (very nice) and a 64-bit AMD CPU. I got all this from Mwave.
And of course I stuffed in a DVD/CD drive and a huge SATA disk drive.
Using a motherboard with HDMI out was key, because I just bought a big LCD HDTV.
When it arrived, I installed Ubuntu using this page:
MythTV_Feisty_Backend_Frontend
I bought a TiVo remote control because everyone in my house knows how to use one. I got it from WeaKnees. I love the stuff they sell.
I then found I had an "IR problem". I solved it with a USB-UIRT device, which I also love. Get the one with the 56k detector. It "just worked".
Interesting bits:
I found a control file which described the TiVo remote. And then I had to hack the mapping file quite a bit to get it to be the way I wanted.
Support for the Nvidia chipset was not in the kernel. I used an install package from Nvidia, which was scary. But it did provide a nice X-based config tool which turned out to be handy.
(X looks *really* nice on my HDTV. It makes me think my next monitor for my office will be a 40" LCD HDTV. Why not? Huge screen!)
Result:
After many apt-gets and much twiddling, I now have a nice MythTV box which responds to a TiVo remote. I can view my videos and pictures and watch Jim Lehrer on the big screen (my record list is only PBS and F1 on Speed :-)
I may still get an HD TiVo, but the MythTV box is a very nice adjunct and lets me do things in a nice Linuxy way with my home media. I plan to try FireWire next, and if that works OK I may skip the TiVo HD...
Just a quick note on bookmarks. I use several laptops, a common machine at home and a workstation at work. This can sometimes get confusing when I save web bookmarks in various different places.
I recently discovered <a href="http://www.foxmarks.com/">"foxmarks"</a>. I only use Firefox, and foxmarks is an extension which syncs up my bookmarks everyplace I install it. It's very handy.
I also discovered "gmarks", which places the bookmarks in a pane on the left and makes them easy to use. I'm still on the fence with gmarks.
Sometimes I wish there were a way to make cross-platform "notebooks" with Firefox, combining a directory tree under SVN control, bookmarks, PDFs, and an email folder all into one. And make it easy to archive.
It's often a good idea to make the root file system in an embedded system read-only. If you do this and only make changes to files in a RAM disk (mounted under /tmp, for example), the device will always come back to a known state when powered up. This is a nice feature and often a requirement.
But sometimes you need to make changes which persist. Check out the "mini_fo" (mini fanout) file system. It allows you to layer changes on top of a read-only file system.
I have not used this package directly, but I plan to shortly.
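Since I haven't used it yet, take this with a grain of salt, but from what I've read the idea is a read-only base directory plus a writable storage directory where changes land. My untested sketch of the setup (the mini_fo option names are from memory and may well be wrong):

```
# tmpfs for the writable layer, so changes vanish on reboot...
mount -t tmpfs tmpfs /overlay

# ...or use flash-backed storage if changes should persist.
# Layer the writable storage over the read-only base:
mount -t mini_fo -o base=/ro,sto=/overlay /ro /mnt/union
```

Writes to /mnt/union go to the storage directory; the base stays pristine.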
I ran into something in the announcement of the 2.6.19 kernel called "sleepable RCU". I found this Wikipedia entry for RCU (read-copy-update). It gives some nice background on RCUs.
I don't normally place much credence in Wikipedia pages, but this one seems reasonably good. And it explains why RCUs are a better alternative to multiple-reader/single-writer locks.
You don't care about this unless you're doing low-level work on multiple-CPU systems, which I do from time to time. I'm working on several at the moment, each small.
I'm new to RCUs, but they look like a very low-cost way to manage updates to high(er)-contention data items (note the word "generations" in the wiki article). The concept looks a lot like various academic techniques I've read about over the years for creating safe software locks.