
Generative AI Adoption Is in Single Digits. How Do We Solve This?

When ChatGPT first launched in November 2022, it reached 100 million users in just two months. But if new research is to be believed, those users haven’t stuck around — at least, not for routine use. Only 2% of UK and 7% of U.S. consumers say they’re regularly using generative AI tools like ChatGPT. That’s despite almost a third (30%) of the public being aware of such technology. Clearly, generative AI has an adoption and retention problem.

How Innovation Spreads

The Diffusion of Innovations theory describes how an innovation like ChatGPT spreads gradually through a population and can give insight into why adoption is stalling. Successful innovations become widely used because of five factors:

  1. Relative Advantage: The solution is better than the alternatives available in the market.
  2. Trialability: It is easy for someone to experience the innovation first-hand.
  3. Observability: Others can see the innovation in action or witness its results.
  4. Compatibility: The innovation fits an individual’s (or organization’s) values, past experiences, and current needs.
  5. (Low) Complexity: The solution is easy to understand and use.

As you bring a new innovation or change to an organization, planning your implementation around these five factors can speed up adoption and increase the likelihood of success.

Make the Value Clear

Every technology leader wants people to feel motivated and excited about using their technology. The easiest way to achieve this is to show how a solution will improve someone’s daily work: automating mundane tasks, surfacing better insights, consolidating disparate information, boosting productivity, making the business more competitive, or opening up new growth opportunities.

Whether you are a leader bringing in new software and hardware to your organization or a vendor offering a unique solution in the market, you need to highlight the advantages of someone using your technology compared to another product or nothing at all.

Get Hands-On

Trialability and observability rely on someone being able to experience your technology. For vendors, offering hands-on experiences at events or during customer meetings can bring a solution to life for a prospect, gaining their buy-in for your product’s features.

For technology leaders bringing a new innovation to their organization, setting up opportunities for stakeholders to test and evaluate the solution can get their buy-in for your plans and uncover champions who’ll drive adoption across your company.

As more people experiment with a solution, they may share their experiences and results with colleagues. This can increase the likelihood of high engagement as soon as a technology rolls out company-wide. There are many experiential options for testing and teaching others about a new solution, including virtual IT labs that offer a non-production, customizable environment where people can practice using technologies and gather results and feedback.

Be Aware of Past Experiences and Perceptions

ChatGPT’s compatibility has been hindered by a lingering distrust of AI and what it ultimately means for our livelihoods and career prospects. A series of high-profile hallucinations and rollbacks of planned features (like Voice Mode) adds to the public’s disillusionment with generative AI.

It’s an invaluable lesson for every leader who works with AI or plans to implement it in their organization, and for the creators of such technologies: innovation can only move as fast as your target audience’s risk appetite allows.

Trust ties closely with the perceived compatibility of a solution. Everyone values meaningful, lasting work and being able to provide for their needs. AI has long been painted as the harbinger of mass unemployment. We need to change that narrative.

As AI advances, leaders need to be ready to have open, sometimes hard, discussions with their people. AI governance will be a key capability in the next decade as we define the boundaries of responsible and ethical use. Effective governance and informed decision-making (including giving consent for AI’s involvement) can only come about when people have the right skills and knowledge to fully understand what’s being asked and what’s at stake.

Teach People to Use Your Technology

Complexity isn’t a barrier for generative AI. Type in a few prompts and you’re up and running. For better results, you may need to refine those prompts and provide feedback so the output gradually matches your style, tone, and the other characteristics that mark a piece of work as ‘yours.’ Not every AI solution can boast that simplicity of use, however, and if your product falls into that category, you need a strategy for making it easy for people to learn.
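
As a minimal sketch of that kind of prompt refinement, the snippet below reuses a style guide as a system prompt with the OpenAI Python client; the model name and the style guidance are illustrative assumptions, not recommendations from this article.

```python
# Minimal sketch of prompt refinement with the OpenAI Python client (v1.x).
# The model name and the style guidance below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_GUIDE = (
    "Write in a plain, conversational tone. Keep sentences short, "
    "avoid jargon, and close with a single clear call to action."
)

def draft(task: str) -> str:
    """Generate a draft that follows the reusable style guide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your plan offers
        messages=[
            {"role": "system", "content": STYLE_GUIDE},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(draft("Summarise this week's project status for the leadership team."))
```

In this sketch, acting on feedback simply means editing STYLE_GUIDE as people react to the drafts; nothing is retrained.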

Investing time at the start of your AI implementation to train your people to the right level can help with technology adoption and retention long-term while also ensuring you’re getting the best results from your chosen AI solution. As mentioned previously, effective training also leads to better AI governance and trust.

Not all training is equal, however. Organizations have traditionally relied on knowledge-based learning and assessments (multiple-choice quizzes) that can only upskill people to a certain level. For people to be truly ready for the AI era, you need training that teaches them exactly how to use a technology in your organization, for your specific use cases. Giving people safe spaces to practice using an AI tool is the best way to ensure they know how to use it. You can even add a validation element by setting specific tasks or scenarios, such as handling a customer’s ‘request for data removal,’ that the individual must complete to be validated at a certain skill level.
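
As a sketch of how such a validation check might be scored, the snippet below evaluates a hypothetical ‘request for data removal’ exercise by confirming the customer’s record is gone from a sandbox dataset and that the removal was logged; the field names, pass criteria, and example data are all invented for illustration.

```python
# Hypothetical scoring check for a 'request for data removal' scenario.
# Dataset fields, learner actions, and pass criteria are invented for illustration.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    passed: bool
    notes: list[str]

def validate_data_removal(sandbox_records: list[dict], customer_email: str,
                          audit_log: list[str]) -> ScenarioResult:
    notes = []
    # The customer's record must no longer exist in the sandbox dataset.
    still_present = any(r.get("email") == customer_email for r in sandbox_records)
    if still_present:
        notes.append("Customer record was not removed.")
    # The learner must have recorded the removal for audit purposes.
    logged = any(customer_email in entry for entry in audit_log)
    if not logged:
        notes.append("Removal was not recorded in the audit log.")
    return ScenarioResult(passed=not still_present and logged, notes=notes)

# Example: a learner who removed the record and logged it passes the scenario.
records_after_exercise = [{"email": "other.person@example.com"}]
log = ["Removed data for jane.doe@example.com per customer request"]
print(validate_data_removal(records_after_exercise, "jane.doe@example.com", log))
```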

Practice in a Safe Space

Broader adoption of AI has been hindered by fear of the unknown, and rightfully so in many cases. Executives are unsure of the unintended consequences of setting such a powerful tool loose in their organization. Thankfully, there are ways to alleviate these fears.

Organizations can understand how AI will behave before releasing it to the wider business by experimenting in safe lab environments. Non-production environments such as labs are ideal for training and testing new AI innovations because they are removed from live environments and data yet imitate a tool or process as it would work in real life. This reduces the risk of someone accidentally introducing vulnerabilities or causing a data breach while offering the opportunity to build the exact AI skills the business needs.
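
One way to keep such experiments away from live data is to seed the lab with synthetic records. The sketch below uses the Faker library to generate a stand-in customer dataset; the record fields and the ‘support ticket’ framing are assumptions for illustration, not a prescribed setup.

```python
# Sketch: build a synthetic customer dataset for a lab environment so that
# prompts and tests never touch production data. Requires `pip install faker`.
# The record fields and ticket wording are illustrative assumptions.
import json
import random
from faker import Faker

fake = Faker()
Faker.seed(42)   # reproducible synthetic data for repeatable lab exercises
random.seed(42)

def synthetic_ticket() -> dict:
    """One fake support ticket that looks realistic but maps to no real person."""
    return {
        "customer": fake.name(),
        "email": fake.email(),
        "issue": random.choice(["billing question", "login failure", "data removal request"]),
        "message": fake.paragraph(nb_sentences=3),
    }

lab_dataset = [synthetic_ticket() for _ in range(25)]

# Save the dataset for use inside the isolated lab; the live system never sees it.
with open("lab_tickets.json", "w") as f:
    json.dump(lab_dataset, f, indent=2)

print(f"Generated {len(lab_dataset)} synthetic tickets for the lab environment.")
```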

From Single Digits to Long-Term Success

The low uptake of generative AI shows that individuals don’t feel bought in to, or prepared for, the AI revolution. That hesitation can cost a business its competitive advantage and long-term growth. Addressing concerns about AI and increasing adoption of AI solutions must therefore be a priority for everyone who wants to see AI succeed in the workplace.

About Author
Danny Abdo
Danny Abdo joined the Skillable team in early 2023 as Chief Operations Officer, focused on commercializing the revenue, product and marketing functions. Prior to Skillable, Danny was the Senior Vice President of Global Business Solutions at Degreed, focused on strategizing the learning and HR solutions and technologies. Before Degreed, Danny served four years as Senior Vice President in Bank of America’s global learning organization, leading a number of teams, functions and strategies including vendor management, employee experience and product management for learning. Prior to Bank of America, Danny served as co-founder and EVP of a global workforce performance consulting firm and led dozens of learning and HR strategy and technology-based initiatives for Fortune 500 companies throughout his career at PricewaterhouseCoopers and IBM.