Are you the chosen one? 💻[HTC #67]
Welcome to Hold the Code #67! This week, we’ll explore the notion of the 10x developer, how media disinformation could affect the 2024 US elections, and AI’s evolving role in user research.
Think you’ve got what it takes to be a superstar programmer? Read the full edition to find out. Happy reading!
We’re still looking for contributing articles from you, our readers. Tell us your idea below!
The 10x Developer
Written By: Mark Fortes
The notion of the 10x developer has intrigued and puzzled the software engineering industry for decades. The idea of a developer who can deliver ten times the output of their peers sounds enticing, yet it remains an unproven concept, existing mainly in anecdotes and forum discussions.
Organizations, hungry for efficiency, have long sought these fantastical figures, hoping to accelerate their software development processes. But several questions remain: does the 10x developer truly exist, and is this concept a viable metric for evaluating software teams?
The Origin Story
The emergence of the 10x developer myth can be traced back to the "Coding War Games," a public productivity survey initiated by Tom DeMarco and Tim Lister in 1984. Over 600 developers from various organizations participated in these games, aiming to complete a series of benchmarks in minimal time with minimal defects.
Surprisingly, the choice of programming language had little impact on performance, except for assembly language. Experience, too, showed little correlation with performance, except for those with less than six months of experience in a specific language. Notably, a substantial variation in productivity was found between different organizations. The best organization outperformed the worst by a staggering 11.1 times.
Attempts to pinpoint the traits that define these exceptional individuals have yielded inconclusive results. Efforts to tie the 10x concept to best practices, such as adaptability and continuous learning, have likewise fallen short. Some research, like CMU Professor William Nichols' study, has shown that 90% of developers fall within modest performance ranges, potentially debunking the idea that some are inherently more efficient. Software development's inherent complexity nearly always involves dealing with vast and intricate systems, making it difficult to apply a one-size-fits-all productivity model.
Some early studies, such as the 1968 paper by Sackman, Erikson, and Grant, emphasized the vast individual differences in programming performance. However, the tasks involved were specific, mathematical challenges that might not necessarily apply to diverse real-world software scenarios.
Teamwork makes the dream work?
In the modern era of agile methodologies and continuous development cycles, the focus has shifted. Agile approaches demand skills beyond just coding proficiency. Developers need to identify valuable problems, design user-friendly solutions, gather feedback, and create software that genuinely impacts users.
Most importantly, software development is a team effort. Although it definitely varies from startups to corporate behemoths, understanding the broader context of the problems being solved and collaborating effectively within multidisciplinary teams has become increasingly important. Pursuing the idea of a “10x team” may prove to be more fruitful than a single superstar.
AI-Driven Election Threats: What’s Ahead in 2024
Written By: Ian Lei
Enhanced by the wave of generative AI tools launched throughout the past year, media disinformation is a looming problem in the 2024 United States presidential election. A recent poll found that more than half of Americans expect false information spread by AI to influence the 2024 presidential election.
Even as large tech companies invest internally in election integrity initiatives, disinformation’s threat is compounded by the fact that newly created AI companies lack the capacity to manage election-related risks.
What have been the consequences so far?
Ryan Heath of Axios reported that AI deepfakes are already posing problems in elections. In Slovakia’s September 30 election, a fake audio recording that appeared to show the defeated candidate discussing buying votes circulated.
Additionally, Heath writes, “Audio deepfakes became a flashpoint at the U.K. Labour Party's annual conference, when fake audio of Keir Starmer — the poll favorite to become Britain's next prime minister — was circulated purporting to show him bullying staff and criticizing the conference's host city.”
What could be other consequences?
In an article from the Electronic Privacy Information Center, authors Cali Schroeder and Ben Winters outline possible situations that companies and policymakers should address:
“AI systems and the content they generate can be combined with targeted lists of people and their contact information from data brokers. This would enable bad actors to target financially or otherwise vulnerable groups – like the poor, elderly, minority groups, and more – with content specifically tailored to manipulate them based on fears, stereotypes, or other individualized characteristics…
Bad actors can create and tailor the content of the messages using generative AI to more effectively manipulate different groups and to effectively evade spam filters that would otherwise identify widely repeated messages and prevent some spam.”
What’s being done?
Maria Ressa, a Nobel Peace Prize laureate, along with Camille François, a renowned researcher who exposed Russia's 2016 election disinformation campaign, launched an innovation lab at Columbia University and Sciences Po in Paris. The lab is a component of a digital literacy initiative supported by a $3 million grant from the French government.
A coalition of ten civil society organizations developed a “framework [that] takes a three-pronged approach in its appeal to Big Tech platforms, including recommendations for bolstering resilience, countering election manipulation, and leaving ‘paper trails’ that promote transparency.”
Platforms such as Nooz.ai have introduced functionalities that conduct linguistic analysis on news articles and official documents, aiding users in identifying attempts at manipulating information.
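As a toy illustration of the kind of linguistic analysis such tools perform — and not a description of Nooz.ai’s actual method — a minimal sketch might flag emotionally loaded and hedging language by counting charged terms. The word lists and scoring below are invented for illustration:

```python
from collections import Counter
import re

# Illustrative word lists; real tools use far richer linguistic models.
LOADED_WORDS = {"shocking", "disaster", "corrupt", "outrageous", "scandal"}
HEDGING_WORDS = {"reportedly", "allegedly", "sources", "claims"}

def analyze(text: str) -> dict:
    """Return simple counts of loaded and hedging language in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    loaded = sum(counts[w] for w in LOADED_WORDS)
    hedging = sum(counts[w] for w in HEDGING_WORDS)
    return {
        "words": len(words),
        "loaded": loaded,
        "hedging": hedging,
        "loaded_per_100_words": round(100 * loaded / max(len(words), 1), 1),
    }

report = analyze("A shocking scandal: the allegedly corrupt official reportedly resigned.")
```

A reader-facing tool would surface scores like these alongside the article so users can judge whether the framing, not just the facts, is doing the persuading.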
User Research and AI: Striking an Ethical Balance
Written By: Kimberly Espinosa
AI is shifting the ways we design and experience products and services through user research. To better understand how, here’s some context.
User Research vs. UX Research
User Research: Seeks to understand target users’ behaviors and needs.
UX Research: Analyzes users’ experiences with a product or service.
While the two research categories are similar and often work together, the distinction is important: UX research falls under the larger scope of user research. Additionally, user research can serve different goals, including marketing, whereas UX research is specifically aligned with the design process.
According to the UX Design Institute, the following are the five most important ethical considerations in UX research (keep in mind these are not exclusive to UX):
Transparency and informed consent
Privacy, confidentiality and data protection
No harm done to anyone involved
Neutrality
Honest and accurate interpretation of results
While these may seem straightforward, it is important to consider how biases can still show up. These biases, which again are not exclusive to UX, include confirmation bias, false consensus bias, primacy bias, recency bias, implicit bias, and the sunk cost fallacy.
Examples of how AI can be used in UX research include:
Identifying users and their behaviors
Identifying common user problems within your focus of interest
Automating transcription and text analysis of interviews
Providing a list of direct competitors
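Some of that text analysis can be partly automated even without a large model. Here is a minimal sketch, with invented function names, stopword list, and sample transcripts, that surfaces the most frequent content words across interview transcripts as rough candidate themes for a researcher to review:

```python
from collections import Counter
import re

# Minimal stopword list for illustration; real analyses use fuller lists or models.
STOPWORDS = {"the", "and", "was", "were", "too", "i", "a", "it", "to", "of"}

def candidate_themes(transcripts: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Return the most frequent content words across transcripts as rough theme candidates."""
    counts = Counter()
    for transcript in transcripts:
        counts.update(
            w for w in re.findall(r"[a-z']+", transcript.lower()) if w not in STOPWORDS
        )
    return counts.most_common(top_n)

transcripts = [
    "The signup form was confusing and the errors were confusing too.",
    "I found the pricing page confusing; the signup took forever.",
]
themes = candidate_themes(transcripts)
```

A sketch like this only counts surface words — it would miss that "crashed" and "crash" are the same complaint — which is exactly the gap that richer AI tooling aims to close, while still leaving interpretation to the researcher.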
The State of User Research 2023
User Interviews released their fifth annual State of User Research to learn about how user researchers carry out their practices. An analysis of AI in user research was a point of focus.
Who are the researchers?
Out of a sample of 1,093 researchers, 20% said they currently use AI in their research, 38% are looking to incorporate it in the future, 26% do not use it and do not anticipate starting, and 17% are unsure.
What contributes to the different decisions taken by researchers? The answer lies in considering factors such as diversity, equity and inclusion (DEI), data privacy concerns, and perspectives on AI as a research tool.
User researchers sharing positive attitudes and using AI in their work think that AI has the potential to automate and make the research process more efficient by reducing the “mundane tasks.”
Does AI undermine ethical considerations regarding harm?
Researchers who are actively working toward ensuring their research is more diverse, equitable, and inclusive are less likely to use AI.
For example, a research participant quoted in The State of User Research 2023 worries that people new to UX research might mistake AI for a tool that can replace user researchers, and that AI tools can reinforce biases.
Others share mixed feelings about AI in research:
“I have mixed feelings. I’m excited for certain productivity gains around rote processes, [but feel] skepticism about nuanced analysis [and] concern that there will be an overreliance on AI in UX before it's ready for prime time,” said a research participant in User Interviews’ study.
AI’s integration with user research opens possibilities to streamline the overall process. Specific uses, such as data management, can work well.
Whether your interests lie in user research, research in general, or the ethics of using AI, I encourage you to ask questions about how research has traditionally been practiced and how tools like AI can not only streamline the process but also make it fairer.