Monday, April 24, 2017

Oracle Blogs: Software Adoption Surveys: From Quantity To Quality!

In a democracy, everyone is equal. One person, one vote, and all votes are equal since all people are equal. 

Erroneously, that approach is applied to software adoption surveys too. Anyone at all can respond to a survey and can indicate that they use software product X, Y, or Z. However, what if I am a student paddling in the shallow waters of software development, with the likelihood that I'll abandon it before I graduate, while you are a senior Java architect creating mission critical software for NASA? Should my vote for product X, Y, or Z be equal to yours? Surely not? 

According to "Baeldung", NetBeans usage has MORE THAN DOUBLED between 2016 and 2017, from 5.9% to 12.4%:

And look here! NetBeans IDE is 2nd, beating IntelliJ IDEA and Eclipse:

And... wow... look at this one, NetBeans is 2nd, after Eclipse, leaving IntelliJ IDEA far behind in the dust:

Hurray, NetBeans is waaaaay more popular than X, Y, and Z! 

Sorry, the above results are meaningless, just like the results by RedMonk and by RebelLabs. Yes, even though the above results favor NetBeans, the tool that I like and have been promoting for many years, and will continue to promote, the above results are bullshit.

When you look at the results of a software adoption survey, you have no idea at all whether the 70% using product A are creating financial software while the 2% using product B are brain surgeons or rocket scientists. And surely knowing that makes a difference in how you would evaluate the results. Your software may be aimed at brain surgeons or rocket scientists, and so the 70% doing financial software with a competing product are irrelevant to you.

So, let's stop the meaninglessness of software adoption surveys. Let's turn things around and survey projects instead of people. One project, one vote, instead of one person, one vote.

And there'd also need to be distinctions between types of projects. Plus, there'd need to be agreement on which specific projects to survey, with the same survey being done with exactly the same projects over a specific number of years, e.g., 5, to show trends. 
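
To make the flip from people to projects concrete, here's a minimal sketch in Java; the Response record and the sample data are invented purely for illustration, and show how the same seven responses tally differently under the two rules:

import java.util.*;
import java.util.stream.*;

public class ProjectVotes {

    // Hypothetical shape of a survey response: who answered,
    // which project they work on, which product they named.
    record Response(String personId, String projectId, String product) {}

    public static void main(String[] args) {
        List<Response> responses = List.of(
                // five colleagues on the same project, all naming X...
                new Response("p1", "proj-A", "X"),
                new Response("p2", "proj-A", "X"),
                new Response("p3", "proj-A", "X"),
                new Response("p4", "proj-A", "X"),
                new Response("p5", "proj-A", "X"),
                // ...versus one respondent each from two other projects
                new Response("p6", "proj-B", "Y"),
                new Response("p7", "proj-C", "Y"));

        // One person, one vote: X "wins" 5 to 2.
        Map<String, Long> byPerson = responses.stream()
                .collect(Collectors.groupingBy(Response::product, Collectors.counting()));

        // One project, one vote: collapse each project to a single answer
        // (naively: whatever its first respondent said), then count.
        Map<String, Long> byProject = responses.stream()
                .collect(Collectors.toMap(Response::projectId, Response::product, (first, later) -> first))
                .values().stream()
                .collect(Collectors.groupingBy(product -> product, Collectors.counting()));

        System.out.println("By person:  " + byPerson);   // e.g. {X=5, Y=2}
        System.out.println("By project: " + byProject);  // e.g. {X=1, Y=2}
    }
}

Five enthusiastic colleagues on one project no longer outvote two independent projects, which is exactly the point.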

We'd get richly textured results like this: 
In the 1000 commercial projects surveyed over the past 5 years, 40% use Java-based product X, 30% use competing Java-based product Y, while the remainder don't use Java-based products at all. Over the past 3 years, adoption has shifted significantly amongst the 1000 commercial projects surveyed, from product Y to product X in the financial sector, while in the logistics sector adoption has remained constant. Meanwhile, Java-based product Z has declined, though not significantly, and remains popular in large enterprises with over 500 full-time software developers who combine product X with product Y.

90% of the 700 financial projects surveyed use product X, while 75% of the 500 scientific projects surveyed use product X together with competing product Y, because product Y provides benefit A in the context of process B, which is important specifically to scientific projects because of reason C. In 20% of the organizations surveyed, strict rules are defined about software usage, while 70% leave it up to the developer, and 10% did not respond to this question.

In all of the 2500 organizations with over 100 full-time developers surveyed, three competing open source products are used in one way or another. In the aerospace domain, which is 15% of the 2500 organizations surveyed, product A is slightly more popular than product B, while in the educational domain, which is 60% of the 2500 organizations surveyed, product B is clearly dominant. Product C is mainly popular amongst developers at startups, encompassing 20% of the organizations surveyed.
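
Producing that kind of segmented breakdown is mechanically straightforward once the unit of measurement is the project rather than the person. Here's a minimal sketch, again in Java, assuming a hypothetical Project record that carries a sector label and the set of products the project uses:

import java.util.*;
import java.util.stream.*;

public class SegmentedReport {

    // Hypothetical shape of a surveyed project: its sector
    // and the set of products it actually uses.
    record Project(String sector, Set<String> products) {}

    public static void main(String[] args) {
        List<Project> projects = List.of(
                new Project("financial",  Set.of("X")),
                new Project("financial",  Set.of("X", "Y")),
                new Project("scientific", Set.of("X", "Y")),
                new Project("scientific", Set.of("Y")));

        // Group projects by sector, then report, per sector,
        // what share of its projects use each product.
        projects.stream()
                .collect(Collectors.groupingBy(Project::sector))
                .forEach((sector, group) -> group.stream()
                        .flatMap(p -> p.products().stream())
                        .collect(Collectors.groupingBy(prod -> prod, Collectors.counting()))
                        .forEach((product, n) -> System.out.printf(
                                "%s: %d%% of %d projects use %s%n",
                                sector, 100 * n / group.size(), group.size(), product)));
    }
}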

Wouldn't these kinds of results be far more meaningful than the winner-take-all mentality of one (random) person, one vote, which is the current approach that software adoption surveys take? You'd be able to judge the relevance of the results based on your target audience. For example, you'd be able to say things like: "Well, since we don't care about startups, it doesn't matter that we're not popular there." Right now, we're unable to say these kinds of meaningful things because, of the random 2000 people taking part in your survey, they could all be in the same country, or working in a similar domain, or a majority could be sympathetic to your organization, or they came across your survey because they happened to be following you on Twitter, or they could all be working on the same or similar projects.

Let's stop focusing on the very simplistic "how many people filled in my survey" (e.g., "Last year, 2250 Java developers decided to take the time to answer the questions, and so it's fantastic to see this year that number is almost double - we got 4439 answers.") and switch over to the much more complicated task of "how can I be sure I am getting quality data from my survey". For example, if you were to ask me "do you use OpenOffice or Microsoft Word", I would probably say "OpenOffice", even though I use both products more or less equally (together with other small editors like Notepad and Sublime, but you'll never know that, since you never asked). You're not giving me the choice to say that. And my individual usage statistic is meaningless if everyone else answering your survey is just like me, or working on the same project, or if I have persuaded my friends to fill in your meaningless survey, or if the rocket scientists at NASA don't care enough, or are behind a firewall, and so aren't filling in your survey.
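
The OpenOffice/Word problem above is a question-design bug you can demonstrate in a few lines. Here's a hypothetical sketch, with made-up respondents, showing how a forced single choice erases exactly the dual usage just described, while a multi-select question preserves it:

import java.util.*;
import java.util.stream.*;

public class MultiSelect {

    public static void main(String[] args) {
        // Made-up respondents: what each one actually uses.
        List<Set<String>> actualUsage = List.of(
                Set.of("OpenOffice", "Microsoft Word"),
                Set.of("OpenOffice", "Microsoft Word"),
                Set.of("Microsoft Word"));

        // Forced single choice: each respondent must name exactly one
        // product, so dual users pick one arbitrarily (here: whichever
        // sorts first alphabetically).
        Map<String, Long> forcedChoice = actualUsage.stream()
                .map(used -> used.stream().sorted().findFirst().orElseThrow())
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()));

        // Multi-select: every product a respondent uses is counted.
        Map<String, Long> multiSelect = actualUsage.stream()
                .flatMap(Set::stream)
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()));

        System.out.println("Forced: " + forcedChoice);  // e.g. {Microsoft Word=3}
        System.out.println("Multi:  " + multiSelect);   // e.g. {OpenOffice=2, Microsoft Word=3}
    }
}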

To be honest, I am not hopeful, at all. I admit to being a bit cynical, since I have the feeling that the most important reason software surveys exist is not to gain meaningful insights (though I believe you completely when you claim good intentions, and I applaud you for what I think you think you're honestly trying to do), but to promote the organization behind the survey as a neutral repository of truth and insight. And the glossier the brochures you print, the shinier the graphs that display your random data, the more suspicious I am that you're marketing your organization rather than gathering meaningful data.
