I imagine there are a lot of ways a Migration installation on Win 8 could fail, but just to share my fix…
I did a Windows 8 upgrade from Windows 7, and about 30% into the Migration phase I got this error:
Setup has failed to apply migration data.
… Followed by a rollback to Windows 7.
This error tells you nothing – very frustrating. Look in the logs located in C:\$WINDOWS.~BT\Sources (surf around in there and you’ll find the log file that applies, possibly in the \Migration folder).
In my case, I noticed an entry that pointed to a Migration Error – it couldn’t move my desktop.ini file in my Music folder. It was looking for C:\Users\bpotter\Music\desktop.ini and couldn’t find it, failed the migration, and triggered a rollback.
I deleted the desktop.ini file (I don’t really care if I lose the folder view settings for my Music folder) and the install and migration completed successfully.
It’s been a little while since the last Christmas project, so it was time to do something again.
This year, I started out with the following criteria for the Christmas for the City display:
- Something interactive
- Something that gets more fun as more people join in
- Something that lets groups of people compete
The weekend before, my wife came home with a Nestle model train she got at a toy sale for $20:
This was great – trains and Christmas go together, and if the train could be driven by some input, we have the beginning of an interactive display. My plan at this point was to mount a series of industrial buttons and wire them to a Netduino. The more buttons the kids pushed, the faster the train would go.
However, I was running a little low on time at this point, so I needed something a little simpler. I was thinking of what might be possible while I was watching TV, and noticed my Kinect sitting under the TV. That was perfect. The more motion in front of the depth camera, the faster the train goes.
Using the Kinect library, I wrote a small app that grabbed each camera frame and passed it into an AForge.NET motion detection library. This gave me a motion analysis similar to this picture, where red areas represent “things that have moved in the last second or so”.
By counting the number of “motion pixels” each second, I ended up with what I coined the “motion number”. Depending on how much motion was happening, this could be anywhere from 0 up to around 600,000. I translated it into a percentage of the maximum value seen since application start, giving a “motion percentage” from 0-100%. This more or less serves as the value we need for the train throttle.
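The original app was C# using the Kinect SDK and AForge.NET, but the core idea is simple enough to sketch in a few lines of Python. Everything here is my own stand-in: the frame format (grayscale 2-D lists), the diff threshold, and the names are assumptions, not AForge’s actual API.

```python
# Sketch of the "motion number" -> "motion percentage" idea.
# Assumes grayscale frames as nested lists of 0-255 ints; the
# threshold of 25 is an arbitrary stand-in for the detector's tuning.

DIFF_THRESHOLD = 25

def motion_number(prev_frame, frame):
    """Count pixels that changed meaningfully between two frames."""
    count = 0
    for prev_row, row in zip(prev_frame, frame):
        for p, c in zip(prev_row, row):
            if abs(c - p) > DIFF_THRESHOLD:
                count += 1
    return count

class MotionMeter:
    """Turns raw motion numbers into a 0-100% value, normalized
    against the largest motion number seen since startup."""
    def __init__(self):
        self.max_seen = 1  # avoid divide-by-zero before any motion

    def percentage(self, motion):
        self.max_seen = max(self.max_seen, motion)
        return 100.0 * motion / self.max_seen
```

Normalizing against the running max means the very first burst of motion always reads as 100%, then the scale settles down as bigger crowds show up – a quirk that worked fine for a one-night display.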
Mounting the Train
Trains, sensitive electronics, and anything else not bolted down or secured don’t last very long in front of kids, so in order to keep the train from being kicked, shoved, punched, or stolen, it needed some kind of enclosure.
For this I used some Bosch aluminum profiles – this stuff is amazing; we use it for just about everything. The awesome distributor we use at work fabbed/cut pieces for me in a day, and then it was off to the assembly area – the area between the couch and the dining room table.
This is also the point where I found out our new puppy hates portable drills…
OK, so one problem.
I set up the Nestle train and… it didn’t work. Either the engine or the controller didn’t work. Still not sure what the issue was, but I had come too far now to abandon this one.
This was Saturday. I had the rest of the night and Sunday to finish this thing. I called The Train Loft in Winston-Salem Saturday evening. They were closing in 15 mins but Jeff stayed late so I could come get another train. They had an amazing display – not something I was going to attempt:
Anyway, Jeff hooked me up with a Polar Express train set, and tricked out the track a little bit with a figure-8 instead of just the oval. Project officially over-budget at that point, but hey, Polar Express and Christmas go together, right?
Controlling the Train Throttle
What I thought was the easy part actually became the hardest. Turns out that train voltage is weird in a lot of ways (this train used AC, not DC), and controlling it via a computer is not that easy. I looked at various options, even trying an AC dimmer limited in software so as not to overload the train. This got risky quickly, so I opted for a simpler mechanical control on the actual train controller. I pulled out the Lego Mindstorms NXT programmable controller and modified the train controller handle to attach to the NXT…
While it looked cool, the “conveyor track” had a lot of slippage in it. Had to modify it further to couple the servo motor directly to the controller, and then I was in business.
Making it into a Game
So, now I had a “motion percentage” to use as a throttle value, a way to control the train via software, and a mounting rig.
Now, I just had to make it into a game. So I wrote an app with this workflow:
- Plays a Polar Express intro clip with the instructions (“move your body to make the train go”)
- Gives a 3, 2, 1 countdown
- For 30 seconds, enables Kinect, plays a Polar Express theme song while the train runs, displays the “motion percentage” as a speed from 0-400 mph (yes, not really to scale, but hey…), and displays the “distance traveled” by the train
- After 30 seconds, stop train and display high scores (how far the train went in the 30 seconds)
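The round logic above can be sketched in Python roughly like this (the real app was .NET; the once-per-second sampling, the names, and the miles-per-hour conversion are my assumptions, though the 30-second round and 400 mph ceiling come straight from the workflow):

```python
# Sketch of the 30-second round: each second, the current motion
# percentage maps to a display speed (0-400 mph), and the "distance
# traveled" accumulates from those speeds.

ROUND_SECONDS = 30
MAX_MPH = 400

def run_round(motion_samples):
    """motion_samples: one motion percentage (0-100) per second.
    Returns (per-second speeds in mph, total distance in miles)."""
    speeds = [pct / 100.0 * MAX_MPH for pct in motion_samples[:ROUND_SECONDS]]
    # each sample covers one second = 1/3600 of an hour
    distance = sum(s / 3600.0 for s in speeds)
    return speeds, distance
```

At a flat-out 100% for the full 30 seconds, that works out to a little over 3 “miles” per round – plenty for a high-score board.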
I wrapped all that up, and our awesome Christmas for the City volunteers helped set up the rig at the convention center. Here are some pics of what it looked like:
Overall, this worked out great. It was a self-contained rig most of the time, running one “round” every minute. Kids participated in nearly every round of the 6-hour event. We had groups of kids competing, groups of adults competing, a few “die-hards” with sweat pouring down their faces, and most of all, a ton of smiles.
One of my favorite parts of the night was when one kid came over and played a few rounds, and then went to see Santa across the room and asked him for “a Polar Express dance game just like that one”.
All in all, I think I achieved the initial objectives, and combined some great hardware components together in a short amount of time!
Just for fun, I tested Big Buck Bunny being served through IIS Media Services from the local disk, running on an Amazon EC2 Micro instance (the smallest available). This is a fairly lightweight server, i.e. something you would never want to run in a production environment. Consider this the streaming equivalent of “Will it Blend?”…
The results are in – when it’s going well, it’s surprisingly great, and when it’s not going well, it’s terrible.
Total Chunks: 2691
Chunks Tested: 2691
Manifest Download Time: 310ms
Avg Video Chunk Response Time: 357ms
Avg Audio Chunk Response Time: 61ms
Excellent Chunks: 590 (21.92%)
Good Chunks: 1821 (67.67%)
Warning Chunks: 68 (2.53%)
Degraded Chunks: 6 (0.22%)
Poor Chunks: 206 (7.66%)
Stream 0: video, 2962kbps, 2056kbps, 1427kbps, 991kbps, 688kbps, 477kbps, 331kbps, 230kbps
Stream 1: audio, 128kbps
Things go OK around 89% of the time. But when they don’t, it crashes and burns…
Yikes. You can almost feel the Silverlight players freaking out.
If you are deploying a Smooth Streaming infrastructure, you already know it’s all HTTP-based. Requests for little video chunks hit your web server, and your web server looks up the correct video “chunk” within the audio/video file and serves it up.
However, it can be difficult to get a good benchmark on how your infrastructure is doing at serving up chunks, especially when your Silverlight clients are seeing random buffering errors or you run into scaling problems.
First off, there is a lot of information available from the Smooth Streaming Health Monitor app – in a couple of seconds you can have a trace of the decisions the Silverlight Adaptive Streaming Media Element is making and export it to Excel.
But for when you just need comprehensive chunk data across all bitrates to diagnose how your origin/CDN is doing, I made this app (I almost called it “Chunker”):
Enter the manifest URL of the on-demand smooth stream you want to test (note that this does not currently support live or composite manifests), for example: http://server.com/streams/BigBuckBunny.ism/Manifest
Once you click Begin Test, a new test tab will open and start requesting chunks based on the manifest information. The results will tell you if you may have a problem with your disk IO on your origin or some other problem preventing chunks being delivered in a timely manner.
Note: Only the first 1000 bytes (about 1K) of each chunk are downloaded. The point here is not to test bandwidth, but rather to test your infrastructure’s performance as it relates to reading/seeking fragments and assembling chunks.
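The tool itself is a .NET/ClickOnce app, but here’s a rough Python sketch of what one pass does, assuming the conventional Smooth Streaming chunk URL layout (`QualityLevels(...)/Fragments(...)`) and some made-up latency thresholds for the Excellent-through-Poor buckets – the real tool’s thresholds aren’t spelled out here:

```python
import urllib.request

# Assumed latency buckets (ms) for grading each chunk response;
# these cutoffs are illustrative, not the tool's actual values.
BUCKETS = [(100, "Excellent"), (500, "Good"), (1000, "Warning"),
           (2000, "Degraded")]

def chunk_url(manifest_url, stream, bitrate_bps, start_time):
    """Build a chunk URL from an on-demand manifest URL, following
    the usual Smooth Streaming convention."""
    base = manifest_url.rsplit("/Manifest", 1)[0]
    return f"{base}/QualityLevels({bitrate_bps})/Fragments({stream}={start_time})"

def grade(response_ms):
    """Bucket a chunk's response time into a quality label."""
    for limit, label in BUCKETS:
        if response_ms < limit:
            return label
    return "Poor"

def fetch_head(url, nbytes=1000):
    """Request only the first ~1K of a chunk, like the tool does,
    so we measure seek/assembly latency rather than bandwidth."""
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{nbytes - 1}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read(nbytes)
```

For example, `chunk_url("http://server.com/streams/BigBuckBunny.ism/Manifest", "video", 331000, 0)` targets the first video fragment of the 331kbps stream; time each `fetch_head` call and feed the milliseconds into `grade` to build the Excellent/Good/Warning/Degraded/Poor tallies.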
Hopefully based on the assessment of each chunk, you can get an idea of how your CDN / origin / standalone box is doing at delivering chunks.
Run Smooth Stream Performance Testing Tool (ClickOnce)
About a month ago, I made a decision to pull Live Smooth Streaming support from our Nimbus product offering. It is our most requested feature by far. Nearly every account signup wants to live stream, and who wouldn’t?
I’m not crazy. Problem is, Live Smooth Streaming is the Cinderella of content delivery, and she’s stuck at home.
A little background – this “smooth streaming” concept kicked off somewhere around 2009 (I believe MSFT’s first Smooth Streaming event was the French Open). To get in the game, you had to have some crazy hardware, most likely a couple of dedicated encoders running $20K apiece. Add to that 10mbps of upstream bandwidth to support it, and you’ve got a behemoth (yep, I said behemoth) of flying bits to deal with. Cinderella rides in a really, really expensive carriage.
Then, in 2010, Microsoft released Expression Encoder 4. The gates of heaven opened, and no longer did you have to have a $20K encoder – you could now do it with a piece of software that costs in between “free” to $200. This is where we came in. We provided the publishing points, origin services, and CDN – everything you needed to deliver your live event to the world.
Folks started using it, streaming events from corporate webinars in the USA to operas in Russia. While it worked okay for the most part, we found ourselves in a consistent stream of support issues:
- While most folks had hardware good enough for single-bitrate streams (good enough for Flash, Windows Media, UStream), it was by and large insufficient for Smooth Streaming. Trying to overtax the hardware produced weird results with audio/video sync and reliability issues.
- Connection recovery wasn’t pretty.
After a number of months with these issues, I pulled it. That raises the question: what now? I think the answer lies in encoding.
There is no question that adaptive streaming is the future. There is and will continue to be a big difference between the bandwidth you have at work, at home, on your mobile device, and soon, in your car. There’s a big difference between the size of your iPhone screen and your 60” living room smart TV. Media must adapt to your device and network before we can move toward phasing out television and toward interactive experiences on a wide scale.
This is what makes Smooth Streaming the Cinderella of live video on the web: the most beautiful solution, sitting at home mopping the floors while stepsister Flash is off at the ball. I’m not knocking services like UStream and Livestream.com – they’re good services, just built on older technology.
Smooth Streaming will remain an option reserved only for those with huge budgets for as long as 100% of the encoding burden is on the client. The bar is too high for this technology.
At MIX ‘11 in Vegas, the Channel 9 team did an incredible job of producing and streaming the event. They rented fiber at $500/hr and a bank of encoders. These are not normal people. This is a huge corporation with a high-profile event. It looks great, but the bar is 5 miles high in the sky.
Non-mainstream sports. Concerts. User groups. Weddings. Church. Webinars. Corporate training. Just a few examples of live events that have huge potential for higher-quality engagement that will keep deferring to other technologies until the bar is lowered. In other words, normal people are going to be happy with Cinderella’s stepsisters until the carriage is affordable.
So that’s what we’re going to do next. We’re moving encoding “to the cloud”. In a couple weeks we’ll push out the first beta. With the burden of encoding moved to the datacenter, we can push as much quality as your connection can handle, with a high quality stream on normal hardware. This is the first step in making a superior adaptive-streaming technology accessible to the events that can benefit from it.
Come on Cinderella, hop in.
So this past Friday, the CEO of Expensify, David Barrett, started off the weekend by ticking off 97.2% of the .NET community with a “Why we don’t hire .NET programmers” blog post.
This goes on from time to time. This weekend it’s .NET vs. whatever language(s) that Expensify prefers; other times it’s network equipment like Cisco vs. HP vs. Netgear vs. Dell, followed by Mac vs. PC and, let us not forget iPhone vs. Droid. Technology racism, if you will.
No one is ever correct in generalized, vague discussions, especially when talking technology platforms. Like driving on a highway, you can cover a lot of distance in a short time, but you really haven’t spent any time in the towns you just blew through.
The technology of choice depends on the work at hand.
Case in point: This past week, I went to work. I sat down at my desk, where I have a PC and a Mac.
I connected to our Cisco wireless access points, which in turn connect to our HP and Dell switches.
Mixed bowl, huh.
I love swimming in such a soupy mix, because it makes me a better solution provider when I have more than one tool on my belt – and it gives me respect for each vastly different approach that these tools take.
The CEO of Expensify ultimately compared .NET to “cooking in a McDonalds kitchen” – trying to make the point that folks who use .NET are sheltered from low-level details (a “real kitchen”).
“None of this makes you a “bad programmer”. All these differences are perfectly irrelevant if you just want to make 1.6 oz burgers as fast as possible, and commit the rest of your career to an endless series of McDonalds menus. But every day spent in that kitchen is a day NOT spent in a real kitchen, learning how to cook real food, and write real code.”
I guess the C# I wrote last week was fake code. I guess I wrote it for a fake customer. I suppose the check they’ll send me is fake. Does the PHP count?
You see, a good cook can make a great meal – er, real food – in a McDonalds kitchen. Or over a campfire. Or at a steakhouse.
The point I wanted to blog about so that I don’t forget is this: there is a huge difference between a vague and a specific technology criticism. The question we’ve got to ask, the one that changes the game, is:
“In what context, and in what example?”
“Macs are better than PCs.” In what context? (Certainly not ERP. But certainly print graphics.)
“iPhone is better than Droid.” In what context? (Certainly not turn-by-turn navigation. But certainly videoconferencing.)
… and back to Expensify:
“Again, this isn’t a rant against .NET — it’s fine. It’s not even a rant against .NET developers being incapable of learning — they can learn as well as anyone. It’s a rant against .NET teaching the wrong things for startups.”
In what context, and in what example? What wrong things? And whose startup?
If you’re unable to answer that question, you may as well be arguing Why You Don’t Hang Out With People Named David.
Make your choice for the job at hand. Deliver great solutions, in whatever kitchen you cook.
I’m learning a few things about the iPad 2 as we try to stretch its limits.
The situation is this: As a simple “remote broadcast” setup, we want to connect FaceTime to a regular TV (with composite inputs – you know, the Yellow / Red / White jacks), so that we can see the remote person on the TV. Seems pretty simple, with iPad 2’s display mirroring features, right? Well, not exactly.
(If you’re a nerd, the “TV” in this equation is actually a video capture card; we’re piping FaceTime into a live encoder and broadcasting it to a live stream. Think CNN Skype interview-style. A reasonably-priced capture card is no different than an older model TV with no HDMI input, with S-Video and Composite inputs.)
So, on to the adventure. Apple has a few different connectors for the iPad/iPhone/iPod Touch that mirror the display or provide a TV out in some form or fashion. Let’s bring out our contestants:
Apple 30-pin to Composite AV Cable (the yellow / red / white thingies)
This would naturally be the most direct approach, but unfortunately, it doesn’t work. It functions as TV Out rather than display mirroring: you can play a video over it, but it does not mirror the screen. When using the FaceTime app, there is no signal.
Goodbye, Composite AV Cable.
The direct approach doesn’t work, so if you’re like me, you’ll just take the next best option and try to convert it to Composite:
Apple Digital AV Adapter (the HDMI cable)
This works reasonably well as a display mirroring solution. I connected it to a Samsung LCD TV, and the picture seems a little overexposed, but it does the job and mirrors FaceTime on TV via HDMI. One major drawback for our eventual composite-TV destination, though:
Your HDMI display/conversion device MUST support HDCP. So if you’re thinking all you need to do is convert HDMI to composite: most HDMI converters don’t support HDCP, and without it the adapter just doesn’t output any signal.
At last, we have a solution.
Apple VGA Adapter
The VGA Adapter does, indeed, mirror FaceTime on iPad 2. And you can scan-convert its output to composite.
All things considered, the “mirroring” functionality in iPad 2 works great in HDMI (mostly as advertised except when Hollywood’s legal people get in the way), but this was a fun little trip through the iPad Accessories aisle.
I imagine this will become a popular solution as FaceTime reaches more devices and more iPads get turned into presentation platforms, so I hope this post will save a little time and a little headache if you’re on the same journey. Happy Mirroring!