https://www.eiu.com/n/campaigns/ai-from-experimentation-to-implementation/
Summary and Review of "AI: From Experimentation to Implementation?"
By E. Serry
Artificial Intelligence (AI) has transitioned from an experimental phase to one of increasing adoption and implementation, driven significantly by advancements in generative AI. This shift has profound implications across industries, offering both opportunities and challenges. The report explores the deployment of generative AI in various sectors, its impact on democratic processes, and the sustainability issues associated with its use.
Democratisation of AI Through Generative Tools
The launch of generative AI models like
OpenAI's ChatGPT has revolutionised the accessibility of AI, allowing
businesses to harness its capabilities for diverse applications. Generative AI,
leveraging large language models, facilitates complex text and multimedia
analysis. However, while generative AI is attracting significant investment,
non-generative AI still constitutes 90% of corporate AI applications.
Businesses are progressively moving from proof-of-concept to scaling generative
AI solutions, provided they address its inherent limitations, such as
hallucinations and errors.
Generative AI in Business and Industry
Generative AI is being utilised to enhance operational efficiency, foster innovation, and
improve customer service. These applications align with broader digital
transformation goals across sectors. Key use cases include:
- Operational Efficiency: Generative AI enables productivity gains
and cost reduction through optimised processes. In technology sectors, AI
accelerates software development and streamlines internal workflows.
- Innovation: Generative AI fosters innovation by simplifying the analysis of
research data, as seen in sectors like energy and healthcare. It enables
the creation of tailored customer experiences and advanced problem-solving
tools.
- Customer Service: AI-powered chatbots improve customer
engagement across industries. In automotive manufacturing, companies such
as Mercedes-Benz and Renault have implemented AI-driven chatbots for
customer assistance and marketing campaigns.
Sector-Specific Applications
Generative AI’s impact is evident across multiple industries:
- Automotive: Companies such as Volkswagen and Kia use voice-enabled AI
assistants for in-vehicle operations, while Renault employs conversational
AI for advertising campaigns.
- Consumer Goods: Retailers like Sainsbury’s and Walmart have deployed AI tools to
streamline operations and enhance customer interaction.
- Energy: Generative AI supports innovation in oil exploration and
operational efficiency, with examples like Shell's partnership with
SparkCognition to optimise subsurface imaging.
- Financial Services: Banks like JP Morgan employ AI for
advanced financial analysis and cash flow management, showcasing the
technology's potential to automate complex tasks.
- Healthcare: AI expedites drug development and improves healthcare delivery,
with initiatives like WHO's chatbot, Sarah, providing real-time health
information.
Challenges of Generative AI
Despite its potential, generative AI poses significant risks. Notably, hallucinations
in AI outputs can lead to misinformation and reputational harm, as demonstrated
by Air Canada's chatbot errors. Addressing these challenges necessitates robust
oversight, rigorous testing, and the development of ethical AI frameworks.
Generative AI and Elections
The political sphere has not been immune to the influence of generative AI. With
its capacity to produce vast quantities of content at minimal cost, generative
AI has become a potent tool in electoral campaigns. Its implications are
particularly significant in democracies with polarised electorates and
fragmented information ecosystems.
- Characteristics of Vulnerable Democracies:
- Free and fair elections
provide a fertile ground for AI-generated misinformation.
- Polarised societies are
more susceptible to the proliferation of fake content that exploits
divisions.
- Fragmented media
landscapes facilitate the spread of disinformation, especially through
social media platforms.
- Case Studies:
- The 2024 US presidential
election illustrates the susceptibility of polarised democracies to
AI-generated propaganda. Foreign interference by nations like Russia and
China further exacerbates these risks.
- Slovakia's 2023
parliamentary election underscores the potential for AI-generated
deepfakes to sway public opinion during critical periods.
Sustainability: A Growing Concern
As AI adoption accelerates, sustainability challenges emerge, particularly the
energy demands of generative AI systems. The International Energy Agency (IEA)
estimates that global electricity consumption by AI-driven data centres could
double between 2022 and 2026, equating to the energy consumption of an entire
country like Germany.
- Regulatory Responses:
- The European Union has
implemented measures such as the Energy Efficiency Directive to monitor
and mitigate AI's environmental impact.
- In the US, legislative
efforts like the Artificial Intelligence Environmental Impacts Act aim to
address these challenges, although progress remains slow.
- Industry Responsibility:
- Organisations must
balance the benefits of AI adoption with its ecological footprint by
integrating renewable energy sources and optimising resource utilisation.
The Future of AI Implementation
The evolution of AI is an ongoing process, requiring realistic expectations and a
focus on scalability. While artificial general intelligence (AGI) remains a
distant prospect, current AI applications do not need perfection to deliver
meaningful benefits. However, human oversight and ethical considerations will
be pivotal in shaping AI's trajectory.
Review of the EIU Report
1. Overgeneralisation of Use Cases
The report provides examples of generative AI
applications across industries but lacks nuanced insights into sector-specific
challenges (Marcus & Davis, 2019). For instance, while the automotive and
healthcare sectors are mentioned, it omits the operational difficulties faced
by smaller firms, such as data readiness or cost barriers (Vinuesa et al.,
2020).
2. Insufficient Exploration of Non-Generative AI
Although the report acknowledges that 90% of AI
usage involves non-generative AI, it fails to delve into the comparative
strengths and weaknesses of classical AI and generative AI. This skews the
discussion towards a single technology and misses an opportunity to present a
holistic view of AI adoption (Goodfellow et al., 2016).
3. Lack of Quantitative Evidence
The report references energy consumption and costs
associated with generative AI but does not provide detailed datasets or
methodological transparency. Quantitative studies, such as those by Strubell et
al. (2019), could have bolstered its claims regarding the environmental and
financial impacts of AI systems.
4. Ethical Considerations Addressed Superficially
While ethical risks like misinformation and bias
are acknowledged, they are not deeply explored. Floridi and Cowls (2019) argue
that addressing these issues requires a robust ethical framework, which the
report does not provide. For instance, the implications of biased AI-generated
content in political campaigns are merely mentioned without actionable
insights.
5. Limited Focus on Practical Implementation Challenges
The report discusses the strategic potential of AI
but does not adequately explore operational hurdles, such as integration with
legacy systems or workforce readiness (Gasser & Almeida, 2017). These are
critical factors that can determine the success or failure of AI
implementations.
6. Underdeveloped Analysis of AI’s Election Impact
The focus on AI-generated content in democratic
systems overlooks potential risks in non-democratic regimes or hybrid systems.
Binns (2018) highlights that AI applications in polarised environments can
exacerbate societal divisions, yet this is only partially addressed in the
report.
7. Minimal Consideration of Long-Term Sustainability
Although energy consumption and environmental
impacts are mentioned, the report lacks a forward-looking perspective on
mitigating these challenges. Crawford and Joler (2018) argue that understanding
the full lifecycle of AI systems is critical to addressing their sustainability
concerns.
Suggestions for Improvement
- Sector-Specific Deep Dives: Expand the discussion to include unique challenges and success
factors across industries (Vinuesa et al., 2020).
- Balanced AI Coverage: Provide a more comprehensive analysis of both generative and
non-generative AI technologies (Goodfellow et al., 2016).
- Quantitative Evidence: Include detailed datasets to validate claims, particularly
regarding energy consumption and costs (Strubell et al., 2019).
- Comprehensive Ethical Analysis: Explore ethical challenges in-depth, offering actionable
strategies for mitigation (Floridi & Cowls, 2019).
- Operational Challenges: Address practical barriers, such as skill gaps and infrastructure
readiness (Gasser & Almeida, 2017).
- Global Perspectives: Broaden the analysis of election risks to include non-democratic
regimes (Binns, 2018).
- Sustainability Innovations: Highlight emerging technologies or regulatory measures aimed at
reducing AI’s carbon footprint (Crawford & Joler, 2018).
Bibliography
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT), 149–159. https://doi.org/10.1145/3287560.3287583
Crawford, K., & Joler, V. (2018). Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data, and planetary resources. AI Now Institute.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1
Gasser, U., & Almeida, V. A. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/MIC.2017.4180835
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon Books.
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1355
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., ... & Nerini, F. F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), 1–10. https://doi.org/10.1038/s41467-019-14108-y
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., ... & Schwartz, O. (2018). AI Now 2018 Report. AI Now Institute.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.