
MVP versus EVP: Is it time to introduce ethics into the agile startup model?

The rocket ship trajectory of a startup is well known: Get an idea, build a team and slap together a minimum viable product (MVP) that you can get in front of users.

However, today’s startups need to reconsider the MVP model as artificial intelligence (AI) and machine learning (ML) become ubiquitous in tech products and the market grows increasingly conscious of the ethical implications of AI augmenting or replacing humans in the decision-making process.

An MVP allows you to collect critical feedback from your target market that then informs the minimum development required to launch a product — creating a powerful feedback loop that drives today’s customer-led business. This lean, agile model has been extremely successful over the past two decades — launching thousands of successful startups, some of which have grown into billion-dollar companies.

However, building high-performing products and solutions that work for the majority isn’t enough anymore. From facial recognition technology that has a bias against people of color to credit-lending algorithms that discriminate against women, the past several years have seen multiple AI- or ML-powered products killed off because of ethical dilemmas that crop up downstream after millions of dollars have been funneled into their development and marketing. In a world where you have one chance to bring an idea to market, this risk can be fatal, even for well-established companies.

Startups do not have to scrap the lean business model in favor of a more risk-averse alternative. There is a middle ground that can introduce ethics into the startup mentality without sacrificing the agility of the lean model, and it starts with the initial goal of a startup — getting an early-stage proof of concept in front of potential customers.

However, instead of developing an MVP, companies should develop and roll out an ethically viable product (EVP) based on responsible artificial intelligence (RAI), an approach that considers the ethical, moral, legal, cultural, sustainable and social-economic considerations during the development, deployment and use of AI/ML systems.

And while this is a good practice for startups, it’s also a good standard practice for big technology companies building AI/ML products.

Here are three steps that startups — especially the ones that incorporate significant AI/ML techniques in their products — can use to develop an EVP.

Find an ethics officer to lead the charge

Startups have chief strategy officers, chief investment officers — even chief fun officers. A chief ethics officer is just as important, if not more so. This person can work across different stakeholders to make sure the startup is developing a product that fits within the moral standards set by the company, the market and the public.

They should act as a liaison connecting the founders, the C-suite, investors and the board of directors with the development team, making sure everyone is asking the right ethical questions in a thoughtful, risk-averse manner.

Machines are trained based on historical data. If systemic bias exists in a current business process (such as unequal racial or gender lending practices), AI will pick up on that and think that’s how it should continue to behave. If your product is later found to not meet the ethical standards of the market, you can’t simply delete the data and find new data.

These algorithms have already been trained. You can’t erase that influence any more than a 40-year-old man can undo the influence his parents or older siblings had on his upbringing. For better or for worse, you are stuck with the results. Chief ethics officers need to sniff out that inherent bias throughout the organization before it gets ingrained in AI-powered products.
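To make that concrete, here is a minimal, hypothetical sketch (synthetic data plus scikit-learn, neither prescribed by the article) of how a model trained on biased historical lending decisions simply learns to reproduce the bias:

```python
# Hypothetical sketch: synthetic "historical" lending data in which
# group 1 was penalized regardless of creditworthiness. A model trained
# on those decisions absorbs and reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
credit = rng.normal(size=n)         # genuine creditworthiness signal
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Biased historical decisions: group 1 needed much better credit to be approved.
approved = credit - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

X = np.column_stack([credit, group])
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

# The model has learned the penalty: approval rates diverge by group.
print("coef on protected attribute:", model.coef_[0][1])  # strongly negative
print("approval rate, group 0:", preds[group == 0].mean())
print("approval rate, group 1:", preds[group == 1].mean())
```

Note that dropping the protected attribute and retraining would not necessarily help: in real data, correlated proxy features often carry the same signal, which is why the bias has to be caught before it is baked in.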

Integrate ethics into the entire development process

Responsible AI is not just a point in time. It is an end-to-end governance framework focused on the risks and controls of an organization’s AI journey. This means that ethics should be integrated throughout the development process — starting with strategy and planning through development, deployment and operations.

During scoping, the development team should work with the chief ethics officer to be aware of general ethical AI principles: behavioral norms that hold across many cultural and geographic applications. These principles prescribe, suggest or inspire how AI solutions should behave when faced with moral decisions or dilemmas in a specific field of usage.

Above all, a risk and harm assessment should be conducted, identifying any risk to anyone’s physical, emotional or financial well-being. The assessment should look at sustainability as well and evaluate what harm the AI solution might do to the environment.

During the development phase, the team should be constantly asking whether their use of AI aligns with the company’s values, whether models are treating different people fairly and whether they are respecting people’s right to privacy. They should also consider whether their AI technology is safe, secure and robust, and how effective the operating model is at ensuring accountability and quality.
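One way to make the fairness question testable is to track a simple disparity metric alongside the usual model metrics. Below is a minimal sketch, assuming binary predictions and a single protected attribute; the function name and the five-point threshold are illustrative, not an industry standard:

```python
# Hypothetical CI-style fairness check: demographic parity gap, i.e. the
# spread in positive-prediction rates across groups. Pure Python.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Max difference in positive-prediction rates between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative threshold: fail the build if rates differ by > 5 points.
preds  = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
assert demographic_parity_gap(preds, groups) <= 0.05, "fairness gap too large"
```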

A critical component of any machine learning model is the data used to train it. Startups should be concerned not only with the MVP and how the model is proven out initially, but also with the eventual context and geographic reach of the model. This will allow the team to select a properly representative dataset and avoid future data bias issues.
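Even a crude representativeness check can catch coverage gaps before training begins. Here is a hypothetical sketch comparing a training set’s group shares against the intended market (the target shares below are invented for illustration):

```python
# Hypothetical representativeness check: compare the share of each group
# in the training data against the target market it should reflect.
from collections import Counter

def coverage_gaps(sample_groups, target_shares, tolerance=0.05):
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in target_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# Made-up target market shares for the product's launch geography.
target = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5

print(coverage_gaps(sample, target))
# {'group_a': (0.7, 0.5), 'group_c': (0.05, 0.2)}
```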

Don’t forget about ongoing AI governance and regulatory compliance

Given the implications for society, it’s just a matter of time before the European Union, the United States or some other legislative body passes consumer protection laws governing the use of AI/ML. Once a law is passed, those protections are likely to spread to other regions and markets around the world.

It’s happened before: The passage of the General Data Protection Regulation (GDPR) in the EU led to a wave of other consumer protections around the world that require companies to prove consent for collecting personal information. Now, people across the political and business spectrum are calling for ethical guidelines around AI. Again, the EU is leading the way, having released its 2021 proposal for an AI legal framework (the AI Act).

Startups deploying products or services powered by AI/ML should be prepared to demonstrate ongoing governance and regulatory compliance, taking care to build these processes now, before regulations are imposed on them later. Performing a quick scan of proposed legislation, guidance documents and other relevant guidelines before building the product is a necessary step in developing an EVP.

In addition, revisiting the regulatory and policy landscape prior to launch is advisable. Having someone on your board of directors or advisory board who is close to the active deliberations happening globally would also help you anticipate what is likely to come. Regulations are coming, and it’s good to be prepared.

There’s no doubt that AI/ML will bring enormous benefits to humankind. The opportunity to automate manual tasks, streamline business processes and improve customer experiences is too great to dismiss. But startups need to be aware of the impacts AI/ML will have on their customers, the market and society at large.

Startups typically have one shot at success, and it would be a shame if an otherwise high-performing product were killed because ethical concerns weren’t uncovered until after it hit the market. Startups need to integrate ethics into the development process from the very beginning, develop an EVP based on RAI and continue to ensure AI governance post-launch.

AI is the future of business, but we can’t lose sight of the need for compassion and the human element in innovation.



from TechCrunch https://ift.tt/3JKFKgN
via Technology
