mount /dev/sr0 tmp
vobcopy -i ../tmp -F 64 -m
This produces a 6.5GB copy which has the protection intact. Burn to double layer DVDR as is.
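A hedged sketch of the burn step, using growisofs from dvd+rw-tools; the mirror directory path below is a placeholder, not the author's actual layout:

```shell
# Placeholder path: vobcopy -m writes a mirror directory named
# after the DVD title. -dvd-video builds a DVD-Video filesystem
# from the mirrored VIDEO_TS tree; -dvd-compat closes the disc
# for maximum player compatibility.
growisofs -dvd-compat -Z /dev/sr0 -dvd-video ./DVD_TITLE/
```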
Ubuntu for Android has been the talk of the town for the last few days. What it purports to do is bring a Linux desktop experience to you from a phone's docking station. It looks like they're seriously pushing it for the Motorola Atrix. The reason is that the Atrix has HDMI out (something standard on Androids nowadays), but it's also one of the few phones with a keyboard and dock. Sounds great, right?
Well, no actually.
For the power user, it's a lose-lose, because the power user probably already has a Linux desktop. There's really zero incentive to run Ubuntu in the background except to eat your battery and storage, because you can't access it while your phone isn't docked. The people who would actually use this sort of device probably already own a tablet or a laptop. More to the point, there are two ports of OpenOffice to the droid, and there's Google's own office offering along with document storage in the cloud. The services are already there for Android without having to switch out from Android to another OS.
The second glaring problem for power users is that you can't install another Linux on the phone. Say my preference is Red Hat's desktop (which is actually the de facto Linux desktop in business). What do I do with a phone with Ubuntu installed on it? Not a lot, clearly. What if I want to load another OS on it? Too bad. What if I want to run a second Android OS on the phone? Too bad.
That last bit is really important. The way the Android OS is engineered, there's a setting for the screen size (pixel density) that determines which apps run in which modes, or which apps are compatible with a given device. If I were Motorola, instead of entertaining Ubuntu on the phone (which really doesn't bring any value to the device), I would run a second Android on the phone and leverage Google's sharing services. The first Android would be configured for battery savings (no HW acceleration) and the second Android would be configured for speed (all HW acceleration). Dock it and you bring up the second Android, which looks just like the first except all the settings are geared towards making it a tablet or desktop.
What about the casual users? Casual users will first notice a several-GB partition missing from phones which supposedly have 16GB of storage. Unlike the dual-Android setup above, each OS in this Ubuntu-phone shit sandwich needs its own resources. Secondly, they're going to notice that apps don't run correctly without the pixel density being set correctly. Games, which you would reasonably expect to work nicely and look better on a PC screen, simply aren't going to work because their UI isn't geared towards a keyboard and mouse. Finally, why Ubuntu at all? Casual users want either a Windows PC or a Mac. They won't understand why their resources (documents, spreadsheets, etc.) aren't available in both. Ubuntu can use Google's cloud storage, but Android can't use Ubuntu One, which makes the biggest selling point of an Ubuntu tablet a liability instead of a bonus.
Finally, why a 2.x-series Android OS?
Ubuntu needs to stay the hell off my phone.
I wish I were the Anthony Bourdain of technology. William Gibson came close with No Maps for These Territories, but it lacked the right vibe. I think it was because of Bono. Anyway, a tech tourist show with a postmodern host is something I would love to do when I retire. Drink beer with project managers, wear dark sunglasses, smoke their clove cigarettes and just jam out with the platform. Who cares if it crashes? Shitty local bands provide the backdrop in the smoke-filled bars as we brainstorm cool stuff over the local deep-fried dish.
We're at the cusp of a new revolution; we're at the cusp of the old revolution. Sun Microsystems said 10 years ago that "the network is the computer". We saw the Tadpole notebook die. "Who would run UNIX on a laptop?"
Who would run Linux on a cellphone?
Solaris and Linux are old news, crushed under the terrible irony of their own success, and Android is really the dragon risen. Solaris did Oracle so well that Oracle bought it and turned it from a wonderful garden into a toaster. Linux continues on as a desktop, or a server, but Nokia's championing of a full Linux on a phone never materialized with any success. Google came in the night and hammered it into a legitimate mobile platform by throwing out the trappings of the OS while keeping the enterprise-level Java ideas (something lost on Apple), and what do we have now?
We have a mobile app delivery platform. Mark my words, this is huge. It's so big Ubuntu has been pretending it was their idea and completely missed the boat. Protip, guys – your desktop browser doesn't belong on a tablet. But who's to say that's not going to change into a mobile browser through theming? Who knows what goes on in their heads at all?
The new internet isn't going to be made from webpages; it's going to be made from applications. There are two particular forces which caused this: HTML is junk, and the OS always got in the way. Cellphones today aren't that different from enthusiast computers of yesteryear. They play games, they capture video, they play music, they take pictures and they do it all wirelessly. Think about that for a moment. To make a computer do this, you need a webcam, you need a graphics card, you need fast storage and you need sound. The computer's biggest problem was that it didn't come with these things. Windows always insisted it needed updates, driver disks, etc. The problem was always the OS. Apple ended up going way over the mark and also branded the heck out of an OS. People line up around the block for it; it's completely bizarre to me. Apple misses the mark too: applications you purchase on your device don't work on your computer. In a lot of ways Apple did these things, but I also think they priced themselves back into the enthusiast market. There's no reason to use them when only a small minority of people have these features. We were missing the important part of the puzzle – Cheap, Complex Devices.
You get cellphones which do these things for free. Sign a two-year contract, pick the carrying case color of your choice and out the door you go. It plays video. It captures video. It's a camera. It's an audio recorder, a music player, and it surfs the web, all wirelessly.
Why doesn’t it do this on the web? HTML.
HTML has made awesome strides over the years for client-side execution; the problem is that the clients have made great strides over the years at not being the executors. Phones are still ARM, 1GHz (if you're lucky) devices with processors the size of your thumbnail. While things like Tegra have gone a long way towards specific work units (nVidia's GPU, etc.), there's no hardware accelerator for HTML. It just doesn't exist. Dalvik, of course, is hardware accelerated. See what Google did there? Instead of using a presentation language for applications and trying to accelerate that, knowing full well MS was going to stomp them to death with IE, they accelerated the language you write your presentation layer in. Suddenly the camera, the audio recorder, and the phone don't require a stack of driver disks. They have very elegantly end-run the presentation problem by making the OS go away. They catered to the idea that no one cares what their phone runs. A phone is a phone the same way a TV is a TV or a car is a car, except when your TV can play on your phone because of the Netflix app and your car gets its maps from Google via Bluetooth. People don't care, per se. It's a phone. Netflix is a neat trick. Netflix keeps your eyes on the phone. How many ads does Netflix run? Zero; you pay for it. How many ads does the CNN app run? A lot. You pay for the service with ads, like TV, but instead of channels now you have apps.
This is uninspired, insipid horseshit. It doesn’t change how we do things, it merely reassigns TV channels to applications. Instead of tuning to the channel, you click a button. All that’s done is make the phone into the remote control and the TV, or you could think of it as a TV without a remote control. It’s boring. It’s the thing legislation is made of to “protect the rights of consumers” because the MPAA and the RIAA don’t get it, they don’t come up with new ideas.
Here's an idea: take the device and do something with it. Make an AutoZone app which lets you pay for a mechanic to connect to your phone's camera so you can show him where you're stuck on the project. They can sell you special phone soap when you're done, because you didn't wash your filthy hands before touching the display. Make a social network app which lets you define public content you'll share with people in an area, then walk around the building with your GPS on to define the area where people will exchange info with you. Enjoy the particular vibe of a movie or song? Why not a music player which correlates where people hang out with what they listen to? People who list their activities as "sports" and listen to upbeat jazz while running may have a route which attracts amateur athletes who don't enjoy complex terrain but want to run for fitness outside.
What we have driving this is association. When we coalesce these different technologies into a single platform, we need to realize that they stop being technologies unto themselves and can be used in a complementary fashion. At the very least I am surprised that banks haven't set up internet tellers. Not only do people enjoy talking to tellers face to face, but in terms of verifying the security of the account, seeing the customer (and having a picture of them) is worth it for the security alone. The customer feels like the bank takes a personal stake in them, the queue can be managed by the application instead of a line at the branch, and the bank gets strong identity verification. Phones can scan barcodes too; have customers hold their driver's license up to the camera to be scanned for another layer of security, just in case they're some sort of Max Headroom puppet.
The internet presentation is dead; its bones pave the way for the new internet presentation. The future always feels like it's right around the corner.
This entire rant spawned off this post.
I don't think of myself as an evangelist for open source software, but being balls deep in Android the last few days has given me insight into a lot of the problems facing Android as an open source project. Recently one of the XDA guys was pinched for piracy in his ROM and it really came to a head.
The background we have to consider is that the ROMs themselves are thoroughly grey market. The phones can run Linux, but the drivers for the hardware are typically closed source and non-free. Finding the license for them is impossible, as is finding the source to the drivers. We're put in a situation where it's not particularly clear what is and is not GPL at this point – the Android OS has a monolithic kernel except for the cellular portion, which might as well be a preemptive kernel that overrides the running kernel when the phone rings. The problem is the GPL was largely designed to keep the kernel and the drivers free and open, and as an extension of The nVidia Problem (shim loaders) the kernel is no longer free and open on Android. This is a really old argument and has been beaten to death. The Android kernel has shim loaders and we have to live with it. The big rub then is how we legally develop for Android. We are protected by the DMCA and allowed to reverse engineer (clean room) the drivers because no other legal route is provided, but the solution of the day has been to grab the drivers and firmware from other releases of Android and pull them into the future. In essence, shimming the shim loaders. Performance, as you can guess, is hit or miss. In the SGH-T959's case, it's a real mess when it comes to the GPS. Where it goes from annoying to dangerous is that AOSP ROMs don't have valid, working call routing. To get call routing to work, you have to use the Samsung proprietary drivers. This is obviously a contention between de facto piracy and public safety, as call routing is what makes E911 work.
More recently this whole issue of presentation came to a head with neobuddy's ROM. Aside from his worst sin of loving anime, he "rebranded" some commercial software in an effort to add spit and polish to the ROM. MIUI has done the same with their ROM, but everyone sort of gave it a pass for being Chinese. MIUI also occupies a grey-market area where they charge (on the Chinese side) for enhanced features. Weirdly enough it hasn't made it to the US version of the site, but I'm waiting for it to happen. MIUI has therefore been the stalking horse of ROM developers, who pick bits and pieces of it. Better to steal from the Chinese than to steal from a company like Samsung. Neobuddy took it one step further and rebranded several apps, where (ironically enough) the author installed his ROM and took offense. jrummy not only admonished neobuddy for piracy but posted that the official version from the Market had bugfixes people were complaining about.
Now, this is a particularly sticky wicket. On one hand, it's morally wrong to put your name on someone else's software. On another hand, jrummy's application is a ROM flasher which needs those drivers discussed earlier to work, if only in some very indirect way. Finally, there is no Google Market in India. Neobuddy lives in Mumbai; he couldn't have gotten the software legally. (Krishna – correct me if I'm wrong.) This is the intersection of aspiration and incompetence. All fine and understandable, except that when XDA asked neobuddy to show them the sources, he added "kernel rolled by nelson" to his thread instead of linking to a git of the Linux kernel. Sigh.
Fast forward a week while XDA discusses what the heck it means to post your sources, and finally they just throw their hands up and lock the thread. I don't blame them. This immediately balkanizes the community. Neo starts a thread on Facebook (which as a software git is useless) and the XDA guys get thrown under the bus. Twelve hours after his thread got locked, someone posted a mirror of the ROM with the rebranded software removed and a link to the git of the kernel sources. Neo eventually made this particular ROM the official working ROM and removed the other links, which firmly cements us in meta-piracy land.
The commentary on Facebook has been really contentious, and it's hard to understand why people would put effort into a pursuit like this. Fortunately it's easy enough to roll a kernel that the cyanogen assholes don't have a monopoly on it, but it seems people have a weird opinion that ROMs live above the ecology of software. In reality the OS is subservient to the application, which is one of the reasons why I haven't jumped onto Ice Cream Sandwich yet. It doesn't even run Facebook, but people seem to want the latest and greatest from Google while disregarding the fact that everything has to be done through the browser until the apps catch up. Commentary on Facebook was largely "THE XDA GUYS ARE MEANIES, YOU CAN DO WHATEVER YOU WANT". Never mind that if this ROM actually went legal (how?), someone else might exercise their moral imperative to change Neo's name and rebrand the ROM for themselves, just as he did with the software. To that end, most of the people posting on the Facebook group had no idea that rebranding the software was in fact illegal, as was burying the kernel source on some backwoods git no one would have noticed and which certainly wasn't linked to.
To his credit, neo removed the offending rebranded software and has separate download links. The kernel source they're using is now posted to a git. XDA put the thread back up, and most of the people on Facebook haven't figured out how the rest of the internet works, so XDA remains safe. It looks like we're going to get a Nexus S backport of ICS, which is nice. jrummy has not nuked Mumbai.
Pay attention to the date on this post. I get a surprising number of hits on past Android bitching and fixes I've written up. If the post is more than a month old, consider rooting around in the sources to find an updated procedure.
REQUIRED READING: Rooting Froyo. That also gives you clockwork.
With the release of Ice Cream Sandwich, everyone has been wondering which phones are going to get it. The bottom line is T-Mobile said the 4G Samsung Galaxy S is going to get it; they said it would be too slow on the 3G.
This is crap. We all know this is crap. It’s the exact same phone with a different radio chipset. Neither Samsung nor T-Mobile is creative enough to come out with a totally new phone that looks like a totally old phone.
That being said, kernel.org is back up and Ice Cream Sandwich AOSP is out along with the Gingerbread AOSP. Gingerbread, predictably, is hacky crap and we knew that from the HTC releases. AOSP simply confirms it started out as hacky crap and didn’t advance much because of Ice Cream Sandwich.
The guys over at XDA Devs already have the AOSP of Ice Cream Sandwich ported to the SGH-T959. GPS works, radio works, video works, cameras work, but the battery life is abysmal. It's incredibly encouraging to see this much work done in this short a time, which means the original Vibrant clearly didn't fall far from the mainstream tree. You can follow the thread here. One big caveat – this is active development. Version 7 of the alpha is already out. It's a real pain in the ass to install. The original post hasn't been updated despite being the most commented post on XDA Devs that I can remember. Handle with care.
I upgraded to Fedora 16. This laptop started out on FC14 and now is running FC16. I’ve managed to do it without a re-install.
Gnome is steadily improving, in a direction I am not interested in going. The UI continues to be unappealing, and the lack of themes and of configuration for keybindings (my hacks for which continually get reset) makes Gnome worthless trash for a serious Linux user. The theming community produces the most hilarious amounts of bitching, with comments like "Oh, killer theme, how do I downgrade GNOME so I can use it?" Even the hacks don't carry over. This continues with GNOME into FC16, and it's clearly GNOME's problem.
KDE, on the other hand, seems to have broken somewhere between FC16 and FC16's upgrade to the newer 3.1 kernel (3.1.0 versus 3.1.1). At least part of this is due to the Intel graphics chipset in my laptop taking a huge crap on the kernel upgrade, so X didn't come up correctly the first few times. The fix follows, to be run as root:
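The fix commands themselves don't appear in the post; purely as a hedged reconstruction (assuming the usual stale-initramfs failure mode of the i915 driver after a Fedora kernel jump), something like this, run as root, may be what was meant:

```shell
# Hedged guess, NOT the author's original commands (those are
# missing from the post). Rebuild the initramfs for the newly
# installed kernel so the i915 KMS bits match the running kernel:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
# If X still won't start, boot once with the 'nomodeset' kernel
# parameter from the GRUB menu and run the rebuild again.
```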
Start KDE in safe mode, disable compositing, and let it settle. The stupid Nepomuk (or whatever it's called) semantic desktop service will cause the machine to chug. Disable that trash, since it can't be uninstalled, and your desktop will be a lot faster.
Post-configuration crap – all your network settings will be gone. Your desktop theme will most likely be gone. However, if you got bit by the KDE-not-working-correctly-after-upgrade bug where KIO kills itself, then your fix is here!
Before you try anything here – this guide does really bad things to the ATAPI bus. If your computer suddenly reboots and hasn't written consistent files due to a bus crash, don't blame me or the utilities mentioned here. Just buy a computer that doesn't suck, or buy a USB drive.
I want to buy someone a beer.
I have had a hell of a time with my kid biting everything (including his tongue) and that includes DVDs. I know I’m not alone – DVDs which come from Netflix are great examples of CSI work. You know the previous guy has little kids about the same age because you can look on the DVD and get their dental records.
I've talked about this before, but kids trash media. Not only do they trash media, but the media itself tends to have copy protection which is intentional trash already on the media before your kids get to it. Disney does this to an extreme. What it looks like is a bunch of chapters on the disk which might be the right size and length, except they're filled with garbage. You need the physical copy of the disk. Previously you could use dd on the disk, and that's how I used to do it. I happened to run across safecopy while reading up on disaster recovery stuff for work, and wow.
Here's how safecopy works – it's very similar to dd, where you set the block size huge and allow no read retries. The problem with the dd method is that if you have two files spanning the block size (and remember that DVDs don't really have blocks, so "yes"), you discard the start of the next file. I've been getting around it by setting the block size low for DVDs with "copy protection", which gives me plenty of time to make a list of people to kill while I research who came up with this. If the disk is scratched, I set the block size to a larger value (10M), because you know you're going to hit that same scratch for literally the entire 8.5GB or whatever of the disk. This generally worked well so long as you didn't hit the transition between files.
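For reference, the dd method described above looks roughly like this (device and output paths are illustrative):

```shell
# Small block size for "copy protected" discs: a read error only
# costs you that one 2048-byte sector, but the copy is slow.
dd if=/dev/sr0 of=damaged.iso bs=2048 conv=noerror,sync
# Large block size for a scratched disc: skip past damage quickly,
# at the cost of discarding up to 10M around each error.
# conv=noerror keeps going past read errors; sync pads short reads
# so the output stays aligned with the input offsets.
dd if=/dev/sr0 of=damaged.iso bs=10M conv=noerror,sync
```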
safecopy changes that entirely.
You run safecopy in passes. The first pass does no recovery past bad blocks, and it skips a lot of blocks. In fact it's no different from running dd: dd hits a bad block and skips to the next; safecopy hits a bad block, skips to the next, and keeps track of which addresses have bad blocks in a file it writes. The magic is in the options – safecopy lets you specify the size of the skip in bytes (16 is the default) or as a percentage of the disk size. That second one is the magic one, because block size changes physically as you move closer to the edge of the disk, and the edge of the disk is the part that goes in your little biter's mouth. The question is – how many blocks are destroyed under each tooth mark?
How did I use it?
safecopy --stage1 /dev/sr1 /home/knarrj/tmp/damaged.iso
That makes safecopy do a fast pass and write off 10% of the total disk size (8.7GB) to bad sectors whenever it hits one. It writes a stage1.badblocks file and makes a note of the addresses it skipped. The ISO there is padded. If you try playing this ISO in VLC or whatever, you'll probably play it for a bit and then VLC will crash when it tries to jump to a sector with the content BaDbLoCk. Then follows the magic:
safecopy --stage2 -I stage1.badblocks /dev/sr1 /home/knarrj/tmp/damaged.iso
Cool, huh? Now safecopy goes back and reads the disk backwards from the boundary of the skipped region to find the last good sector. If we ran stage 3, it would be like dd again and attempt to read every byte on the disk. The first pass is about 15 minutes and the second pass about 45 minutes, meaning you can beat tooth marks and structural copy protection in about an hour.
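For completeness, a stage-3 pass would look like the following; per the safecopy man page, the --stage3 preset reads stage2.badblocks automatically, so the invocation mirrors the ones above:

```shell
# --stage3 retries every remaining bad byte individually (back to
# dd-with-retries territory); the preset reads stage2.badblocks and
# writes stage3.badblocks. Slow, so only bother if stage 2 left gaps.
safecopy --stage3 /dev/sr1 /home/knarrj/tmp/damaged.iso
```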