Is the AI hype turning to backlash?

AI has taken some hits this year, but it is still early days

The scariest movie moment for me was not in Alien or The Shining. It was a computer with a glowing red eye saying, "I'm sorry, Dave. I'm afraid I can't do that." That computer, HAL 9000, was as cold, calm, and menacing as any of cinema's worst villains.

The movie is 2001: A Space Odyssey, and the scene comes when Dave, the sole surviving astronaut aboard the ship, demands that HAL open the pod bay doors. Earlier in the film, HAL kills one of the astronauts outside the ship, then proceeds to kill three more in suspended animation.

Scariest movie villain? Agree or disagree?

HAL was a sentient artificial general intelligence (AGI) built to control the systems of the Discovery One spacecraft. So what drove HAL to go on a murder spree? HAL's orders forced it to contradict its mission, and it chose to commit atrocities rather than disobey its programming.

Evil superintelligent computers battling humanity have been a long-running cinema cliché, from the 1927 film Metropolis to modern films like Ex Machina, The Matrix, and The Terminator. The theme touches a nerve: the loss of control, our insignificance in the face of superior beings, and the extinction of humanity.

Of course, these are only movies and AI would never intentionally harm humans. Or would it?

This is a fear some in the tech community have harbored for a long time, going back to Alan Turing. In 1951, he said, “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…we should have to expect the machines to take control.” Since then, people from Nick Bostrom, author of Superintelligence, to physicist Stephen Hawking, to Bill Gates, and many others have been sounding the alarm about the dangers AI poses to the human race if left uncontrolled. In fact, Elon Musk co-founded OpenAI for this very purpose.

Fear of AI has also been amplified by the Effective Altruism movement, which has made controlling AI a goal. Its members have been among the most prominent voices for AI regulation, leading a petition calling for a six-month pause on AI development back in March 2023 and supporting legislation from the EU AI Act to California’s SB-1047.

Andrew Ng on why AI regulations focus on the wrong targets.

However urgent some feel the need to slow the creation of superintelligent AI, the tech firms behind AI have been far less bullish about the dawn of sentient computers. Sam Altman of OpenAI quipped on the Lex Fridman podcast back in March that GPT-4 “kinda sucks”. Google suffered two embarrassing moments. The first was in February, when its Gemini AI image generator created inaccurate images of historical figures. Then in May, it launched AI Overviews in search results, providing fun insights like using glue to keep cheese from sliding off pizza, adding rocks to your diet, and cardio routines like running with scissors.

Anyone for a game of rocks, glue, and scissors?

It seems the euphoria around Generative AI in 2023 has vanished. In the rush toward all things AI, wild expectations were knocked back to earth by AI hallucinations, regulatory capture, and skeptics declaring the AI craze had already peaked. Furthermore, as more and more VC dollars get plowed into AI startups and NVIDIA passes $3 trillion in market cap to become the most valuable company on Earth, many are starting to say out loud that we are in a bubble similar to the one before the Dot Com Crash. That includes one partner at Sequoia, who noted a $600 billion gap between AI infrastructure spending and AI revenue, with too few customers and slowing GPU demand to fill it. It’s worth asking, then: is AI something consumers even wanted?

Do consumers even want AI? Maybe, but they do want what AI can deliver.

The answer is an unequivocal yes! While it is still early days, we are at the beginning of a twenty-year technology cycle that will lead to greater innovation and opportunity for startups led by AI. Every technology cycle has an early spike in activity, attention, and capital, then a cooling-down period. But the long-term growth and impact of AI are inevitable as GPU costs decrease, tooling accelerates the creation of AI applications, and AI delivers tangible top-line and bottom-line growth.

Adoption is not limited to tech circles. Consumer interest in AI has been building: a study by Oxford and Thomson Reuters showed 7% of Americans already use ChatGPT daily and almost 20% use a Generative AI product at least weekly.

Usage is still mostly early adopters, but consumer interest is building.

The greatest uptake of AI, however, has been in the workplace. Matt Wood, VP of AI Products at AWS, shared at the LA AWS Summit that hundreds of thousands of AWS customers are using AI/ML, with over 10,000 customers on Amazon Bedrock, a managed service for accessing a variety of leading LLMs through an API (LLMs are the large language models that drive Generative AI).
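As a rough illustration of what "through an API" means here, below is a minimal sketch of building a request for Bedrock's Converse API with boto3. The model ID, region, and prompt are illustrative assumptions, and the network call itself is shown only as a comment since it requires AWS credentials and Bedrock model access in your account.

```python
# Sketch of an Amazon Bedrock Converse API request (model ID is illustrative;
# check the Bedrock console for the models enabled in your account/region).
import json


def build_converse_request(prompt: str,
                           model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
                           max_tokens: int = 512) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }


# With AWS credentials configured, the request would be sent like this:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**build_converse_request("Summarize our Q2 results."))
#   print(response["output"]["message"]["content"][0]["text"])

print(json.dumps(build_converse_request("Hello, Bedrock!"), indent=2))
```

The same request shape works across the different model families Bedrock hosts, which is the main appeal of the managed-service approach.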

Matt Wood of AWS as keynote speaker for the LA AWS Summit 2024

Recent surveys also confirm strong usage of Generative AI, with companies from BT Group, Lonely Planet, and the PGA to startups such as Leonardo AI, Perplexity, and Theia Insights reaping huge benefits. McKinsey noted 65% of respondents confirmed their organizations were regularly using AI, double the share from just ten months ago. Adoption is also spreading, with 50% citing use across two or more business areas, especially marketing & sales, product development, and software engineering. And at the recent Yale CEO Summit, 200 CEOs shared how AI is already positively impacting their companies.

The economic upside of this early experimentation is enormous. McKinsey estimates that Generative AI could add between $2.6 trillion and $4.4 trillion annually to GDP, more than the GDP of Japan! This financial windfall also comes with massive efficiency gains, as half of today’s labor tasks are estimated to be automated between 2030 and 2060.

Can AI and humans live together and productively?

With these types of rosy predictions, even the AI doomers might have to step out of the gloom and step into the sunlight of this AI revolution! Well, maybe not so fast.

AI still has a long way to go. Hallucinations are rampant and even dangerous, especially in industries with life-and-death consequences like healthcare. AI deepfakes litter the Internet with the potential to destroy lives and cause political upheaval. Because LLM vendors have been vacuuming up the entire Internet, including copyrighted materials, to train their models without permission, some creators are now sabotaging AI output through data poisoning. Plus, there is the constant worry of who is doing what with our personal and private information and how AI apps are using that data.

Maybe the biggest concern is not the AI, but the people behind the AI?

What does this mean for founders as they consider leveraging AI for their startups? It is not enough to build and use AI effectively. Startups also need to consider how AI potentially introduces hallucinations, infringement, plagiarism, toxicity, and vulnerabilities into their products.

This is where AI responsibility can play a role, providing a framework for startups to navigate these pitfalls. Responsible AI refers to the use of AI in ways that are ethical, transparent, fair, and beneficial to society and can be viewed from the following eight dimensions:

  • Fairness & Bias – how AI impacts different populations of users

  • Explainability – how to understand, evaluate & document outputs of AI

  • Safety – how to prevent harmful system output and misuse

  • Privacy & Security – how data for AI is used in accordance with privacy considerations

  • Controllability – how to implement processes that monitor and guide AI behavior

  • Robustness – how to ensure AI operates reliably based on differing or adversarial inputs

  • Transparency – how to communicate about AI to make informed decisions

  • Governance – how to define, implement & enforce responsible AI practices

Much like AI generally, Responsible AI is still nascent. For our part, AWS has taken a proactive approach across our Generative AI stack, putting Responsible AI theory into practice with purpose-built tools, integrating those tools into the AI lifecycle, advancing the state of AI science, and taking a people-centric approach to AI in the following ways:

  • Model Evaluation and Guardrails in Amazon Bedrock allow startups to make informed choices about which LLMs are most suitable for their context, and to automatically block topics and filter content so that insensitive content does not reach users. Another example is Amazon Titan Image Generator, which creates images with invisible watermarks to improve trust in content.

  • Amazon SageMaker Clarify, Model Monitor, and ML Governance help detect bias, track quality & bias drift, and establish controls on AI usage for newly created or customized models as part of the AI workflow.

  • The Amazon Science team is developing cutting-edge research into Generative AI, diving deep to solve the hardest challenges in Responsible AI and creating tools like RefChecker to reduce AI hallucinations.

  • And lastly, we are training 2 million people in AI skills for free by 2025, creating the AWS AI & ML Scholarship program, offering Responsible AI courses on Machine Learning University, and actively participating in establishing AI standards with organizations such as the Frontier Model Forum.
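To make the Guardrails point above concrete, here is a hedged sketch of how a Bedrock guardrail attaches to a Converse API request in boto3. The guardrail identifier and model ID are placeholders (you would first create a guardrail in the Bedrock console and substitute its real ID), and the network call is shown only as a comment.

```python
# Sketch of attaching an Amazon Bedrock Guardrail to a Converse API request.
# Guardrail ID and model ID below are placeholders, not real identifiers.

def with_guardrail(request: dict,
                   guardrail_id: str,
                   guardrail_version: str = "1") -> dict:
    """Return a copy of a converse() request with a guardrail attached.

    Bedrock evaluates the guardrail's denied-topic and content-filter
    policies against both the user input and the model output.
    """
    guarded = dict(request)  # shallow copy; original request dict is untouched
    guarded["guardrailConfig"] = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        # "trace": "enabled",  # optional: inspect which policies intervened
    }
    return guarded


base_request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    "messages": [
        {"role": "user", "content": [{"text": "Tell me about our refund policy."}]},
    ],
}
guarded_request = with_guardrail(base_request, "YOUR_GUARDRAIL_ID")

# With credentials configured, the guarded call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**guarded_request)

print(guarded_request["guardrailConfig"])
```

Because the guardrail is configured server-side and referenced by ID, the same filtering rules apply consistently across every model and application that reuses it.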

We are sometimes asked when the right time is to start using Generative AI, followed by what to build with it. We believe the time for startups to use AI is now, whether to improve operational efficiency, accelerate time to market, or enhance customer experience. But remember to use tools and processes that ensure customers experience trustworthy, bias-free, secure, and relevant AI.

Any nascent and complex technology takes time and significant iterations before its full value can be unleashed. We are at the cusp of that period now with Generative AI. So the question is how will you start using AI today to accelerate the success of your startup?

Our friend and AWS Startup Scout in Southeast Asia (SEA), Arnaud Bonzom, shared a post recently on top quality startup accelerator programs for founders in the SEA region to consider. We are sharing an edited list here and you can find the full list in Arnaud’s original post:

Accelerating Asia (Generalist) by Accelerating Asia Ventures
- Funding & Equity: up to USD 250k for 3%
- Program Fee: USD 35k
- Apply: https://lnkd.in/e2FJGxYn
- POCs: Amra, Craig, Alex

Generative AI Spotlight (AI) by Amazon Web Services (AWS)
- Funding & Equity: no funding, equity free
- Apply: https://aws.amazon.com/startups/accelerators/genai-spotlight-apj
- POCs: Jenny, Lillian, Nicha, Ian

Global Generative AI Accelerator (AI) by Amazon Web Services (AWS)
- Funding & Equity: no funding, equity free
- Apply: https://aws.amazon.com/startups/accelerators/generative-ai
- POCs: Jenny, Lillian, Nicha, Ian

Iterative (Generalist)
- Funding & Equity: USD 150k to 500k
- Apply: https://lnkd.in/eF3WPyq2
- POCs: Hsu, Brian

The Ignition AI Accelerator by NVIDIA
- Funding & Equity: no funding, equity free
- Apply: https://lnkd.in/exrqXag3
- POCs: Benjamin, Li

PearX (Generalist) by Pear VC
- Funding & Equity: USD 250k to 2M
- Apply: https://pear.vc/pearx/
- POC: Pejman

Surge (Generalist) by Peak XV Partners (f.k.a. Sequoia)
- Funding & Equity: up to USD 3M
- Apply: https://lnkd.in/eHSXMeRt
- POCs: Pieter, Aakash, Shailendra

Tenity Singapore (Fintech)
- Funding & Equity: SGD 70k for 2.5%
- Program Fee: SGD 15k
- Apply: https://lnkd.in/eMp7piYQ
- POCs: Jonas, Martin, Charleen

Y Combinator (Generalist)
- Funding & Equity: USD 500k for 7%
- Apply: https://lnkd.in/eeW7yAHH
- POCs: Nicolas, Garry, Gustaf

Two other options are Antler and Founder Institute, which have a number of programs starting now and running into the fall throughout Southeast Asia; simply click the links to find locations. If there are accelerator programs you would like to add to this list, please let us know!

I opened this newsletter with the classic scene from 2001: A Space Odyssey, so I had to share this edited version that imagines what would happen if HAL 9000 were actually Alexa.

This week and last we have been heads down on content and taking an early summer break. One fun milestone, though, was finishing the first season of the Founder Mistakes video series: 21 videos covering the mistakes Mark made during his startup. Next season kicks off next week, digging into the technology mishaps startups commonly make. Tune in and subscribe to the YouTube channel!

Next week Mark will be back out on the road landing in Singapore for a week from June 25 to July 3, so if you are in town and want to meet, let’s get together!