Brian Solis's Blog, page 7

February 18, 2025

blooloop – ISE 2025 Keynote: Brian Solis, AI is Eating the World and Why the AI Revolution is Good for Business

Source: blooloop

Integrated Systems Europe (ISE), the world’s largest AV and systems integration show, took place in Barcelona at the Fira Barcelona, Gran Vía from 4 to 7 February 2025. The event welcomed 85,351 unique visitors from 168 countries and filled the vast conference venue with 1,605 stands, an extended content programme, and new areas for both esports and drones & robotics.

This year’s event highlighted industry megatrends, including AI, audio, cybersecurity, retail, and sustainability. With a 20% larger show floor, advanced technologies were exhibited across all eight of the Fira’s halls. Exhibits celebrated the innovation and creativity of the AV and integration community.

“ISE 2025 has surpassed all expectations, setting new milestones in both attendance and innovation,” said Mike Blackman, managing director of Integrated Systems Events.

Keynotes at ISE 2025

AI is Eating the World: Why the AI Revolution is Good for Business

The opening keynote, AI is Eating the World: Why the AI Revolution is Good for Business, was presented by Brian Solis, head of global innovation at cloud-based platform ServiceNow. Solis is a renowned digital analyst, author, and leader in AI integration. He shared his insights on how AI can advance the pro AV and systems integration industry.

“There is no playbook for how to integrate AI in our work,” said Solis. “Which makes it difficult, but also special.”

Arguing for a disruptive approach to AI, Solis shared how the technology can help us to think outside of the box and “unlock the unknown”, going beyond iterative approaches to discover game-changing use cases.

“This is quite literally a ctrl-alt-delete moment,” he said. “That means you have to imagine how to reboot yourself.”

We can shape this, he said, by asking, ‘What would AI do?’ (#WWAID) and using exploratory prompting around the organisation’s key issues to discover new ways of working.

“When AI starts to become magical is where you expect the unexpected,” he said. “Where you do not know what is on the other side of your prompt. And you keep prompting until you come up with something magical.”

Future Panel Discussion – Visions of Tomorrow followed the keynote. In this session, Solis was joined by Quayola, digital artist and creative keynote speaker, Sarah Cox, founder and MD of Neutral Human, and Fardad Zabetian, co-founder & CEO of KUDO, in a discussion hosted by Josephine Watson, managing editor of TechRadar.


The post blooloop – ISE 2025 Keynote: Brian Solis, AI is Eating the World and Why the AI Revolution is Good for Business appeared first on Brian Solis.

Published on February 18, 2025 05:56

February 17, 2025

Brian Solis Keynote: Reimagining Retail in an AI-Forward World

Originally posted at RetailSpaces

At RetailSpaces, futurist and digital anthropologist Brian Solis took the stage to deliver a message that cut through the noise of AI and tech trends. While generative AI is reshaping the retail landscape, Solis reminded the audience of a timeless truth: the heart of innovation is understanding people.

The Consumer at the Center

Solis’s work as a digital anthropologist focuses on how technology transforms behaviors—not just for consumers but also for employees. “We spend so much time designing the bridge but forget about the people crossing it,” he explained. This shift in priorities is crucial for retailers navigating today’s hybrid physical-digital landscape.

Drawing from his research, Solis introduced “Generation Novel” (Gen N), a term he uses to describe a consumer behavior shift that transcends traditional demographics. “It’s not about being Gen Z or Millennial anymore—it’s about how technology and global events like the pandemic have rewired all of us.”

Fidgetal is the Future

The blending of physical and digital—what Solis humorously called “fidgetal”—is shaping new consumer expectations. Today’s shoppers demand seamless transitions between their online and in-store experiences. Solis highlighted innovations like real-time AI-powered recommendations, modular store layouts, and augmented reality tools that bring products to life, all designed to keep consumers engaged.

“Whether they’re in a store or scrolling on their phones, today’s consumers don’t see a divide between the physical and digital world,” he said. For retailers, this means designing spaces that cater to both worlds—bridging online convenience with in-person connection.

Impatience is the New Competitor

Solis argued that today’s biggest competitor isn’t another brand—it’s consumer impatience. From Uber to curbside pickup, customers now expect speed and convenience in every interaction. “If it’s not fast or intuitive, they’re gone,” Solis explained. For retailers, this means rethinking everything from parking spaces to checkout processes to meet the growing demand for instant gratification.

The Day One Mindset

Solis closed with a challenge to retailers: embrace Jeff Bezos’s “Day One” philosophy. “Every day should feel like the first day of your business,” he said. “The moment you rest on your laurels is the moment you become irrelevant.”

For Solis, innovation isn’t just about adopting the latest tech—it’s about reimagining spaces and strategies to meet the evolving needs of consumers. As he put it, “The future of retail isn’t just digital or physical—it’s human.”


Published on February 17, 2025 05:57

February 4, 2025

Integrated Systems Europe: Brian Solis’ Opening Keynote to address why AI is good for business


Source: Installation International

Brian Solis, head of global innovation at cloud-based platform ServiceNow, is set to give today’s ISE Opening Keynote at 13:00 in room CC4.1.

An acclaimed digital analyst, author and visionary, Solis was named a ‘Top AI Leader’ for 2024 by Rethink Retail, recognizing him as a pioneer of AI integration. He is also well-known as an author of over 60 industry-leading research publications and eight best-selling books exploring disruptive trends, corporate innovation, business transformation and consumer behaviour.

His body of research has studied (and predicted) digital’s impact on business and society and has helped companies and industries change and innovate with purpose and positive outcomes.

Solis’ ISE 2025 Opening Keynote is set to be an essential destination for attendees eager to discover why now is the time for business leaders to rethink their organisations for an AI-defined future.

In the talk, he will explore the future of tech development driven by AI, the opportunities and challenges for the pro AV and systems integration business, and the role each of us plays in shaping the future.

He will delve into why automation has become the standard rather than the objective, why augmentation is the key to setting businesses apart, and how leaders can cultivate the mindset needed to unlock the potential of becoming an AI-first, exponentially growing organisation.

He will also touch on industry pain points such as inertia, loss of business contacts and outdated systems, and offer solutions that restore connections and drive progress.

Immediately after his Keynote, Solis will be joined by Fardad Zabetian, CEO and co-founder of KUDO; Sarah Cox, founder and managing director, Neutral Human; and digital artist Quayola to share their bold visions for the world of tomorrow, including how emerging technologies such as AI, AR, VR and more will redefine how we live, work and play.

The Opening Keynote takes place today at 13:00 in room CC4.1. It is free to attend.


Published on February 04, 2025 03:04

February 2, 2025

Brian Solis Set to Keynote the 2025 Future of Marketing Conference Hosted by Georgia State University J. Mack Robinson College of Business

Brian is proud to serve as the opening keynote speaker at the 2025 Georgia State University “Future of Marketing” conference on February 28th, 2025 in Atlanta.

The event will bring together hundreds of senior marketers, highlight thought leadership from world-class marketing experts and futurists, and recognize excellence in marketing innovation from organizations based in Georgia.


Published on February 02, 2025 10:37

January 30, 2025

diginomica: AI is eating the world – why the AI revolution is good for business

I’m fortunate to serve as the opening keynote speaker at ISE in Barcelona. To celebrate the moment, my team at ServiceNow secured an opportunity at diginomica for a contributed article that explores the keynote topic, “AI is eating the world – why the AI revolution is good for business.”

Summary: AI isn’t just transforming business, it’s redefining what’s possible (if you lead it!). From automation to augmentation, let’s explore how AI can reshape the future of innovation and value creation. Here’s how…

AI is eating the world – why the AI revolution is good for business

The AI revolution isn’t just another chapter in the history of technology — it’s the beginning of a new story entirely. One where businesses can reimagine what’s possible, rethink value creation for an evolving market, and reshape their future together with customers and employees.

In a world where disruption seems to have become the default setting, AI is a great equalizer. Picture this: a small retailer leveraging AI to deliver personalized recommendations on par with e-commerce giants or a healthcare provider using AI to improve patient outcomes with precision diagnostics. AI is leveling the playing field, democratizing innovation, unlocking possibilities not possible before, and opening doors to growth and agility.

Unlocking new value
What makes this revolution so extraordinary is AI’s ability to help creative and open-minded leaders discover value we never even knew existed. It shifts us from the mundane to the meaningful — from reactive to predictive, static to dynamic, from one-size-fits-all commodities to deeply personal and scalable experiences.

Let’s think about customer experience for a moment. In today’s hyperconnected world, personalization isn’t just a nice-to-have; it’s a competitive edge. AI can process enormous amounts of customer data in real-time, enabling businesses to deliver hyper-relevant experiences at scale. Netflix’s recommendation engine, for instance, doesn’t just keep us binge-watching; it’s the cornerstone of content production and a driver of customer loyalty.

At the same time, AI can free us from routine tasks. By automating the mundane — data entry, inventory management, logistics — it allows people to focus on what truly matters: creativity, strategy, and problem-solving. This isn’t about replacing jobs; it’s about elevating them. Teams are empowered to tackle more prominent challenges, leading to more engaged, satisfied employees and customers.

Please read the full article at diginomica! 👈

Please read, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more!

My main list for news, events, and updates, a Quantum of Solis.


Published on January 30, 2025 11:10

January 27, 2025

AInsights: Press Zero for the Future, This AI Operator Takes the ‘Work’ Out of Work

2025 was said to be the year of AI agents and the dawn of agentic AI. I’m just returning from the ServiceNow Sales Kickoff in Las Vegas, and from employees to consumers to the enterprise, it’s on.

Introduction to OpenAI’s “Operator”

OpenAI is releasing a “research preview” of an AI agent called Operator that can “go to the web to perform tasks for you,” according to the launch post. “Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling,” OpenAI says. It’s launching first in the US for subscribers of OpenAI’s $200 per month ChatGPT Pro tier. It is available to Pro users here.

Before we continue, as you read this article and think about AI agents, juxtapose the word operator with orchestrator. You become the orchestrator and AI becomes the operator.

High-Level Summary of Operator

At its core, OpenAI’s Operator represents a bold step toward making AI more than just a conversational tool—it’s meant to serve as your special AI agent. Operator isn’t just about answering questions; it’s about executing specific tasks with intelligence, speed, and adaptability.

From filling out forms and ordering groceries to generating memes on demand, Operator takes on the repetitive, time-consuming tasks that clutter our digital lives. What makes it promising is its ability to navigate the same interfaces and tools we use every day, tools we otherwise juggle disparately, and instead integrate them seamlessly into existing workflows. As such, it introduces new possibilities to give time, and sanity, back to people ready to reimagine how they work in an AI-driven world.

Operator is powered by a next-generation AI model called the Computer-Using Agent (CUA)—an innovation that combines GPT-4o’s vision capabilities with advanced reinforcement learning to navigate and interact with graphical user interfaces (GUIs) just like a human.

Operator can “see” and “act” in a dedicated browser environment. It analyzes screenshots and executes actions via virtual mouse and keyboard inputs. Operator also has the ability to self-correct: if it encounters challenges or makes mistakes, it applies advanced reasoning to adjust in real time. When a task requires human intervention, Operator hands control back to the user.
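That perceive-act-self-correct loop can be sketched in miniature. The toy below (my own illustration; every name here is hypothetical, and none of this is OpenAI’s actual Operator API) simulates a “browser” as a dict of form fields, with an agent that observes state, acts, recovers from a near-miss, and hands control back when it cannot proceed:

```python
from dataclasses import dataclass, field

# Toy sketch of the perceive -> reason -> act -> self-correct loop described
# above. The "browser" is just a dict of form fields; the real Operator works
# on screenshots with a virtual mouse and keyboard. All names are hypothetical.

@dataclass
class FakeBrowser:
    fields: dict = field(default_factory=dict)

    def observe(self):                  # stand-in for capturing a screenshot
        return dict(self.fields)

    def type_into(self, name, value):   # stand-in for virtual keyboard input
        if name not in self.fields:
            raise KeyError(name)        # a "mis-click": no such field
        self.fields[name] = value

def run_agent(browser, goal):
    """Try to fill every field named in `goal`, self-correcting on case
    mismatches and handing off to the user when an action cannot succeed."""
    log = []
    for name, value in goal.items():
        state = browser.observe()                         # perceive
        target = name if name in state else name.lower()  # reason / adjust
        try:
            browser.type_into(target, value)              # act
            log.append(("ok", target))
        except KeyError:
            log.append(("handoff", name))                 # back to the human
    return log

browser = FakeBrowser(fields={"email": "", "city": ""})
log = run_agent(browser, {"Email": "a@b.com", "city": "Barcelona", "fax": "n/a"})
# "Email" is self-corrected to "email", "city" is filled directly, and "fax"
# (which doesn't exist on the page) is handed back to the user.
```

The interesting design point is the last branch: rather than failing silently, the agent surfaces the blocked step, which is exactly the human-in-the-loop handoff Operator’s design emphasizes.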

OpenAI is working closely with leading companies like DoorDash, Instacart, OpenTable, Priceline, StubHub, Thumbtack, and Uber to ensure Operator is practical, reliable, and aligned with real-world business needs. These partnerships help refine its ability to execute tasks efficiently, making AI-driven automation a seamless part of everyday operations.

Beyond business applications, Operator has the potential to streamline and enhance public services. OpenAI is exploring how AI can improve accessibility and efficiency in government workflows by collaborating with organizations such as the City of Stockton. This initiative aims to simplify processes like enrolling in city services and public programs, demonstrating how AI can be a powerful tool for improving civic engagement and accessibility.

Here’s what makes Operator so interesting, even in its research form:

Context Awareness in Action – Unlike traditional chatbots, Operator maintains continuity across interactions, making its responses and actions more intuitive and relevant.

Multimodal Power – Operator processes text and images via screenshots. It interacts with the web dynamically, clicking, scrolling, and making decisions like a human would.

API & Software Integrations – Operator can tap into databases, software tools, and APIs to get real work done.

Adaptive Decision-Making – Operator anticipates needs, suggests next steps, and automates processes without requiring step-by-step instructions.

Personalization & Continuous Learning – The more it interacts, the better it understands user preferences, optimizing for efficiency and impact.

Operator’s “Computer-Using Agent” Model

Operator is powered by a “Computer-Using Agent” model that combines GPT-4o’s vision capabilities with advanced reasoning through reinforcement learning. This means Operator is actively engaging with digital environments in real time.

Here’s why this is important and representative of the beginning of a new era of AI agents and agentic AI:

Operator Can See – It processes screenshots and visual cues, allowing it to interpret and interact with digital interfaces more like a human. If it gets stuck, Operator will ask for help.

Operator Can Act – Using virtual keyboard and mouse actions, it navigates web pages, clicks buttons, scrolls, fills out forms, and executes workflows without requiring custom API integrations.

Bridging Human and Machine Interaction – This capability closes the gap between AI automation and human-like engagement with software and web environments.

OpenAI has essentially built an agent that doesn’t rely on proprietary integrations—it works directly within existing digital workflows, making it more adaptable and immediately useful.

AInsights: Comparison to AI Agents

The AI revolution has long envisioned intelligent agents—systems capable of operating with autonomy, foresight, and strategic execution. The definition of AI agents includes:

Autonomy: The ability to act independently with minimal human oversight.

Proactive Decision-Making: Anticipating needs and making informed choices without explicit prompts.

Goal-Oriented Behavior: Working towards defined objectives rather than reacting to queries.

Continuous Learning: Improving over time based on interactions and outcomes.

Multi-Agent Collaboration: Interacting with other AI agents or humans to solve complex challenges.

Operator is an evolution, not the final form. It enhances automation and intelligence but still requires guardrails, enterprise integration, and predefined rules. It’s a powerful step toward the AI-driven future but not yet the fully autonomous, strategic AI agent envisioned in science fiction.

Why Operator’s Release is Significant

This release matters because it redefines what’s possible with AI today:

Bridging the Gap Between Chatbots and True AI Agents – Operator moves beyond static conversations into real-world, task-oriented execution.

AI in the Enterprise – Businesses can deploy Operator to optimize workflows, freeing up teams to focus on strategy and innovation.

Operationalizing AI for Real-World Use Cases – This is AI that works, not just responds. Industries from finance to healthcare can leverage it to solve real problems.

Building AI Trust & Governance – Operator’s release provides a framework for businesses to deploy AI responsibly while maintaining human oversight.

Competing in the AI Arms Race – With advances from OpenAI, Google DeepMind, and Anthropic, Operator positions OpenAI at the forefront of enterprise AI evolution.

Conclusion

Operator is an inflection point. It signals a shift from AI as an assistant to AI as an active participant in digital workflows. While it’s not yet a fully autonomous agent, it sets the stage for a future where AI doesn’t just respond—it acts, executes, and collaborates in ways that redefine productivity and innovation.

For more on agents in the enterprise world, please visit ServiceNow’s real-world examples.

Please read, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more!

Please subscribe to AInsights, here.

My main list for news, events, and updates, a Quantum of Solis.


Published on January 27, 2025 05:12

January 20, 2025

AInsights: How OpenAI’s o3 Model Is Ushering in an AI Reasoning Revolution

AInsights: Your executive-level insights making sense of the latest in generative AI…

OpenAI introduced its o3 “reasoning” model, three months after introducing o1. Interesting fact: the company skipped o2 not because of a technology leap, but because of trademark issues. Perhaps you’ve heard of O2 in the UK?

The OpenAI o3 model represents a significant advancement in artificial intelligence, positioning it as a reasoning and complex problem-solving model. And no, this isn’t AGI. But it is innovative, pushing AI into a new era of ‘scaling laws.’ More on that in a bit…

Unlike traditional large language models (LLMs) that rely on pattern recognition, o3 introduces simulated reasoning (SR), enabling it to “think” through problems by pausing and reflecting on its internal thought processes. This approach mimics human-like reasoning, making it capable of tackling tasks that require multi-step logic or novel problem-solving.

How is o3 Different from ChatGPT and Other LLMs?

OpenAI’s o3 model differs from earlier ChatGPT iterations and other LLMs in several key ways:

Enhanced Reasoning:

o3 uses a “private chain-of-thought” process to evaluate multiple solutions before responding, improving accuracy in complex tasks like coding, mathematics, and scientific reasoning.

Its benchmark scores approach or surpass human-level performance in areas like visual reasoning (ARC-AGI) and advanced mathematics (AIME). 😅

Adaptability:

o3 can handle tasks it was not explicitly trained for by exploring multiple solution pathways and selecting the best one through an evaluator system.
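That “explore multiple solution pathways, then let an evaluator select the best one” pattern is essentially best-of-n search. Here is a toy illustration of the general idea (my own sketch, not o3’s actual mechanism):

```python
import random

# Toy best-of-n search: propose many candidate solutions, score each with an
# evaluator, keep the best. A sketch of the general pattern only, not o3's
# implementation. Task: find integers (a, b) so that a*x + b fits the data.

data = [(x, 3 * x + 7) for x in range(10)]          # ground truth: a=3, b=7

def propose(rng):
    return rng.randint(-10, 10), rng.randint(-10, 10)

def evaluate(candidate):
    a, b = candidate
    return -sum((a * x + b - y) ** 2 for x, y in data)  # higher is better

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    return max((propose(rng) for _ in range(n)), key=evaluate)

# More exploration never hurts: the n=500 winner scores at least as well as
# the n=1 "first guess" (same seed, so the first candidate is identical).
print(evaluate(best_of_n(500)) >= evaluate(best_of_n(1)))   # prints True
```

The evaluator is the crucial piece: exploring many pathways is only useful if there is a reliable way to score which one is best before answering.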

Performance Efficiency:

The o3 model demonstrates a 20% improvement in efficiency over earlier models on technical benchmarks like SWE-Bench for software engineering.

Safety and Alignment:

o3 incorporates “Deliberative Alignment,” a feature that allows it to critically evaluate responses against safety protocols, reducing risks of harmful or biased outputs.

o3 Model Breakthroughs

Program Synthesis for Task Adaptation: o3 introduces “program synthesis,” enabling it to adapt to new tasks by generating solutions dynamically, rather than relying solely on pre-trained patterns. This approach allows the model to solve novel problems effectively.

Natural Language Reasoning: The model uses advanced reasoning techniques, including “chains of reasoning,” which allow it to analyze tasks step-by-step, improving accuracy and reducing errors like hallucinations.

Benchmark Performance: o3 achieved groundbreaking results on benchmarks like ARC-AGI (87.5% accuracy under high compute) and AIME (96.7% in advanced mathematics). These scores demonstrate its ability to generalize knowledge and reason across complex domains.

Efficiency Gains: The model exhibits improved sampling efficiency, meaning it can achieve more accurate results with less data and compute, making it more adaptable and cost-effective compared to earlier models.

Generalization Abilities: o3’s architecture allows it to learn and adapt quickly with minimal examples, mimicking human-like cognitive abilities and addressing limitations of traditional AI models.

The Second Era of Scaling Laws

AI leaders refer to this phase of AI model development as the “second era of scaling laws.” Let’s take a moment to unpack what that means.

The “second era of scaling laws” in artificial intelligence represents a paradigm shift in how AI models are developed and optimized. It moves away from the traditional approach of simply increasing model size, compute power, and dataset size—methods that have driven much of AI’s progress over the last decade but are now showing diminishing returns. Instead, this new era emphasizes architectural optimization, training efficiency, and innovative techniques like test-time scaling to achieve better performance without proportional increases in computational costs.
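The flavor of test-time scaling can be felt even in a non-AI toy: a fixed “model” produces better answers when granted more compute per query at inference, with no retraining. Below, a truncated series stands in for the model, purely as an analogy of my own, not anyone’s production system:

```python
import math

# Analogy for test-time scaling: a fixed "model" (a series approximation of
# pi) improves its answer when given a larger inference-time compute budget.
# No training is involved; only the compute spent per query changes.

def approx_pi(compute_budget):
    # Leibniz series: pi = 4 * sum_k (-1)^k / (2k + 1)
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(compute_budget))

cheap = approx_pi(10)            # small per-query budget
expensive = approx_pi(100_000)   # larger per-query budget, same "model"

print(abs(expensive - math.pi) < abs(cheap - math.pi))   # prints True
```

The point of the analogy: performance becomes a function of how much compute you spend per answer, not only of how big or well-trained the model is.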

The second era of scaling laws is important for businesses and researchers because it ensures that AI innovation can continue without unsustainable resource demands.

Inference AI

OpenAI’s o3 model is representative of inference AI, the stage where a trained model applies its learned knowledge to new, unseen data to make predictions, decisions, or solve tasks in real time.

OpenAI’s o3 model leverages advanced reasoning capabilities during inference. Unlike earlier models that primarily relied on pattern recognition, o3 employs simulated reasoning and a hybrid reasoning framework (neural symbolic learning combined with probabilistic logic) to actively break down complex problems and generate actionable outputs.

o3 also redefines ‘inference AI’ by introducing reasoning as a core component of the inference process. This evolution allows it to: 1) handle tasks requiring structured thinking and logic, such as diagnostics in healthcare or advanced robotics, and 2) make decisions that are not just based on pre-trained patterns but are dynamically reasoned out during runtime.
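The training/inference split itself is easy to see in code. A minimal sketch (a toy linear model of my own, nothing o3-specific): fit parameters once, then apply the frozen model to unseen inputs.

```python
# Minimal illustration of training vs. inference: learn once, then apply the
# frozen model to new, unseen data. Toy linear fit, nothing o3-specific.

def train(samples):
    # least-squares slope through the origin: w = sum(x*y) / sum(x*x)
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den

def infer(w, x):
    # inference applies learned knowledge; no learning happens here
    return w * x

w = train([(1, 2), (2, 4), (3, 6)])   # learns y = 2x
print(infer(w, 10))                   # prints 20.0
```

What o3 adds, per the description above, is reasoning inside `infer`: instead of a single fixed computation, the model deliberates at runtime before producing its output.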

AInsights

The o3 model marks a qualitative leap in AI capabilities, with implications for industries requiring advanced reasoning and adaptability.

It allows for complex problem-solving. For example, o3 excels in areas like robotics, medical imaging, and financial modeling by addressing tasks that involve multi-step logic or novel scenarios.

o3 also mimics human-level reasoning. Tests show that its performance approaches human-level understanding in certain domains, making it suitable for high-stakes applications like scientific research or strategic planning.

So, what will o3 allow you to do differently?

Compared to traditional LLMs, o3 enables businesses to:

Tackle Complex Tasks: Solve problems requiring advanced reasoning, such as optimizing logistics networks or diagnosing rare medical conditions.

Enhance Decision-Making: Significantly enhance your company’s decision-making processes by introducing advanced reasoning capabilities, improving accuracy, and enabling more dynamic, data-driven strategies. This will provide more accurate insights by simulating multiple reasoning pathways before delivering recommendations.

Develop Tailored Solutions: Fine-tune the AI’s reasoning approach for specific business needs, improving outcomes in areas like customer service automation or predictive analytics.

Mitigate Risks: Ensure compliance with ethical standards and reduce the likelihood of biased or harmful outputs.

Appendix

The Key Features of the Second Era of Scaling Laws:

Optimization Over Size:

The focus is on refining model architectures and training methodologies rather than merely scaling up resources. This includes designing smarter algorithms and more efficient neural network structures to maximize performance gains.

Test-Time Scaling:

A significant innovation in this era is “test-time scaling,” which allows models to dynamically allocate more computational resources during inference (when answering questions or solving problems) rather than just during training. This approach enhances real-time adaptability and decision-making capabilities.

Efficiency and Sustainability:

The second era prioritizes balancing performance improvements with computational efficiency. This shift addresses the growing costs—both financial and environmental—associated with training massive AI models, making advanced AI more economically viable and accessible.

Diminishing Returns from Traditional Scaling:

Previous scaling laws relied on brute-force increases in compute, data, and model size, which yielded predictable improvements but are now hitting a “compute-efficient frontier.” Beyond this point, additional resources result in smaller performance gains, necessitating new strategies for continued progress.

Please read, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more!

Please subscribe to AInsights, here.

Subscribe to my master mailing list for news and events, a Quantum of Solis.


Published on January 20, 2025 06:35

January 18, 2025

AInsights: Lights, Camera, AI: How OpenAI and Google Are Reimagining Hollywood 2.0

AInsights: Your executive-level insights making sense of the latest in generative AI…

OpenAI released its initial version of Sora, a text-to-video AI model that generates high-quality video clips based on your text prompts.

Text-to-Video Generation: Sora enables you to produce videos by simply inputting descriptive text, facilitating rapid content development without the need for extensive technical expertise. But Sora doesn’t work quite the way you might imagine from using ChatGPT: your movie is composed of storyboards, where each scene is described with as much detail as you can provide, allowing you to craft the story from beginning to end.

Asset Integration: You can customize your videos by incorporating your own images and video clips.

Storyboard Tool: Storyboards are the foundation of good storytelling. This feature allows precise control over each frame, aiding in the creation of detailed and structured narratives.

Resolution and Format Options: Sora supports video outputs up to 1080p resolution and offers various aspect ratios, including widescreen, vertical, and square formats, catering to diverse platform requirements.

Subscriptions: If you have a subscription to ChatGPT Plus, Sora is included at no additional cost. You can generate up to 50 videos per month at 480p resolution, with options for 720p. If you subscribe to ChatGPT Pro, you gain access to high-volume workflows. With this plan, you can generate videos at resolutions up to 1080p and lengths up to 20 seconds. You can also download videos without watermarks.

While Sora represents a significant advancement in AI video generation, it still has limitations. There’s a big difference between the promotional videos we’ve watched and what’s possible today. For example, the current version of Sora faces difficulties with complex scene generation, physics simulations, and maintaining image quality and spatial details. OpenAI is actively addressing these issues, as it has with its ChatGPT and o1 models.

Here’s an example of Sora at work…

OpenAI Sora – Quick and Dirty Prompt:

An AI movie in three parts…

Please create a video that shows a business executive reading the digital version of AInsights newsletter published by Brian Solis. Start with the executive looking confused and overwhelmed with information overload about AI, but her team is looking for her guidance and leadership. Then she reads AInsights, and all the answers are clear toward innovation!

Storyboard 1: A business executive sits at her polished office desk, surrounded by stacks of papers, open devices, and charts depicting complex AI data. Her expression is one of confusion and overwhelm, as she holds her head in her hands. Her team peers through the glass office door, looking expectantly at her for guidance.

Storyboard 2: The executive picks up a tablet displaying a digital newsletter focused on AI insights. Her eyes scan the screen, and her face lights up with clarity and understanding.

Storyboard 3: The executive stands confidently, addressing her team with newfound insights and innovative ideas, the stress replaced by a calm assurance.

Google Veo 2

Not to be outdone, Google also introduced Veo 2, its ‘state-of-the-art’ AI video generation tool. It’s not widely available yet, but you can join the waitlist over at Google’s VideoFX Lab.

Like Sora, Veo 2 excels in generating realistic videos from text prompts. What sets Veo 2 apart, for now, is the following…

Cinematic Quality: Veo 2 delivers ultra-high-quality videos up to 4K resolution, with the ability to produce extended scenes lasting several minutes. This level of quality promises to meet the demands of professional-grade storytelling and high-end brand campaigns.

Realism and Detail: By understanding real-world physics and the nuances of human movement and expression, Veo 2 creates lifelike videos that resonate with viewers. This makes it ideal for industries where authenticity is critical, such as healthcare, education, and luxury goods. I’m sure Sora will also offer these capabilities, if it doesn’t already by the time I publish this.

Cinematography Intelligence: Veo 2 interprets detailed creative prompts, enabling users to specify cinematic elements such as:

1) Lens Types: For example, “18mm lens” produces a wide-angle perspective.

2) Camera Angles: Prompts like “low-angle tracking shot” deliver dynamic, immersive visuals.

3) Effects: Instructions such as “shallow depth of field” create professional focus and framing.

4) Wide Range of Styles: From technical demonstrations to emotive storytelling, Veo 2 adapts seamlessly to diverse subjects and genres.
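Taken together, these cinematic controls can be combined into a single text prompt. Here’s a minimal Python sketch of one way to assemble such a prompt; the helper function and its fields are illustrative assumptions, not part of any official Veo 2 SDK or prompt schema:

```python
# Illustrative only: compose a Veo 2-style text prompt from cinematic elements.
# The field names and structure are assumptions, not an official prompt format.

def build_prompt(subject: str, lens: str, angle: str, effect: str, style: str) -> str:
    """Join a subject description with cinematic modifiers into one prompt string."""
    return f"{subject}, {lens}, {angle}, {effect}, {style}"

prompt = build_prompt(
    subject="an elephant painting a portrait of another elephant",
    lens="18mm lens",                 # wide-angle perspective
    angle="low-angle tracking shot",  # dynamic, immersive framing
    effect="shallow depth of field",  # professional focus on the subject
    style="emotive cinematic storytelling",
)
print(prompt)
```

The point is simply that subject, lens, camera angle, effect, and style can be stacked in one prompt, letting you iterate on each element independently.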

This is an example I found online, since I don’t have access to VideoFX yet. It’s an elephant drawing an elephant!

https://briansolis.com/wp-content/uploads/2024/12/MyO9EE6RYdsL7jGp.mp4

AInsights

For executives, Sora and Veo 2 represent a new class of AI tools that allow teams to create strategic assets that empower your organization to communicate more effectively. They offer the ability to differentiate your brand, transform your content strategy, build deeper connections with your audience, and maintain a competitive edge in an increasingly visual and digital-first marketplace.

But with these tools comes the need for vision and imagination, creativity, expertise and skills, and the ability to tell a story that people will watch, feel, and share. While anyone can create a high-quality video, as with any form of content, an audience is never guaranteed.

Hollywood vs. Sorawood vs. Veowood

The impact of OpenAI’s Sora and Google’s Veo 2 on Hollywood is profound, potentially reshaping the entire filmmaking process and industry dynamics.

Studios

For studios, AI video generation tools offer both opportunities and challenges:

Cost Reduction: AI can significantly lower production costs, especially in areas like visual effects and post-production.

Rapid Prototyping: Studios can quickly generate concept art, storyboards, and even rough cuts of scenes, streamlining the pre-production process.

Expanded Creative Possibilities: AI enables the creation of complex scenes and effects that might have been prohibitively expensive or technically challenging before.

Disruption Risk: While major studios have the resources to adopt AI, they may face disruption from smaller, more agile competitors who can now produce high-quality content at lower costs.

Directors

Directors will find their role evolving with AI:

Enhanced Visualization: AI tools can help directors better communicate their vision by quickly generating visual representations of scenes.

Workflow Efficiency: Directors can iterate faster, experimenting with different visual styles and effects in real-time.

Creative Augmentation: AI serves as a tool to enhance, rather than replace, a director’s creativity.

Skill Adaptation: Directors will need to develop new skills to effectively leverage AI tools in their workflow.

Actors

The impact on actors is potentially significant:

Digital Doubles: AI-generated versions of actors could be used for dangerous stunts, reshoots, or even to extend an actor’s career beyond their physical limitations.

Performance Enhancement: AI could be used to refine or alter performances in post-production.

Job Concerns: There are fears that AI could eventually replace background actors or even lead roles in some productions. I need to think more deeply about this…

New Opportunities: Actors may find new roles in voicing or motion-capturing for AI-generated characters.

Screenwriters

Screenwriters face both opportunities and challenges:

Ideation Assistance: AI can help generate plot ideas, character backgrounds, or even dialogue options.

Script Analysis: AI tools can analyze scripts for pacing, structure, and marketability.

Adaptation Concerns: There are worries about AI potentially writing entire scripts, though current technology is far from this capability.

Collaborative Tool: AI is more likely to become a tool that enhances the screenwriter’s process rather than replacing them entirely.

Industry-Wide Implications

Democratization of Filmmaking: AI tools could lower barriers to entry, allowing more creators to produce high-quality content.

Ethical and Legal Challenges: The industry will need to grapple with issues of copyright, actor likeness rights, and the ethical use of AI-generated content.

Job Transformation: While some roles may be at risk, new jobs focused on AI integration and management are likely to emerge.

Creative Boundaries: As AI capabilities expand, the definition of creativity and authorship in filmmaking may need to be reevaluated.

While AI tools like Sora and Veo 2 present challenges to traditional filmmaking roles, they also offer unprecedented opportunities for innovation and creativity. Before disruption takes over an industry, professional and personal growth comes from those willing to first disrupt themselves. The key for Hollywood will be to adapt to this new technology, finding ways to integrate AI that enhance human creativity rather than replace it. As the technology evolves, we can expect a period of significant transformation in how films are conceived, produced, and distributed.

Please read, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more!

Subscribe to AInsights, here.

Join my master mailing list for news and events, a Quantum of Solis.

The post AInsights: Lights, Camera, AI: How OpenAI and Google Are Reimagining Hollywood 2.0 appeared first on Brian Solis.

Published on January 18, 2025 10:44

January 14, 2025

AInsights: Unlocking Executive-Level Insights with Google’s NotebookLM

AInsights: Your executive-level insights making sense of the latest in generative AI…

I was already a big fan of Google’s NotebookLM. Now Google has introduced compelling upgrades and new features that have me diving back in.

Google’s NotebookLM is an AI-powered workspace that transforms how executives organize, interact with, and synthesize information. It combines the power of Google’s Gemini Pro language model with whatever information you’re interested in learning more about to create a personalized research assistant and hub.

NotebookLM allows users to upload up to 50 diverse information sources per notebook, including documents, presentations, PDFs, and websites. For example, I uploaded individual chapters of my book, Mindshift, press coverage, articles I’ve written about it, and podcasts. NotebookLM then analyzes all the content to create summaries, identify key topics, and generate insights. The platform offers a unified interface for managing sources, chatting with AI, and creating new takes on the content provided. Each research project or topic becomes its own ‘notebook.’

Key features include:

Intelligent analysis of uploaded documents, including charts and images.

Question-answering with inline citations.

Collaborative sharing and editing of notebooks.

Generation of podcast-style audio discussions about the content.

Creation of study guides, outlines, and briefing documents.

Interactive Mode!

Alright. That’s great. But for me, the excitement comes with the ability to now interact with the audio overviews, which were already provided in the form of an engaging podcast. Now, you can interact with the hosts!

For those who don’t know, NotebookLM offers a very cool audio overview that isn’t just a simple narration of the information. It’s presented in a podcast format where two hosts discuss the information in a fun, relatable, and engaging way. You’d be hard-pressed to know it was AI if you were simply listening to the audio. What’s new here is “interactive mode,” where you can raise your hand and ask a question.

The hosts will pause and call on you. Once you ask the question, they’ll respond with personalized information based on the content in your notebook.

Here’s an example using my notebook that dives into the information shared in Mindshift Chapter 2!

Brian Solis · Mindshift Chapter 2: GoogleLM Interactive Mode Demo 12/24

Imagine creating notebooks of synthesized information for your teams to understand new information at scale. Whether it’s a handful of people or scaled across dozens or hundreds or even thousands of people, each individual can interact with the AI to understand information and learn in their own (fun and engaging) way.

Redesigned Interface

NotebookLM features an updated layout that organizes the workspace into three distinct panels:

1) Sources Panel: Manage and reference all central information for your project.

2) Chat Panel: Engage in conversations with the AI about your sources, complete with citations.

3) Studio Panel: Create new content, such as Study Guides, Briefing Documents, or Audio Overviews, either individually or collaboratively.

NotebookLM Plus

Google will now offer a premium subscription designed for power users, teams, and enterprises. Benefits include increased limits (e.g., five times more Audio Overviews, notebooks, and sources per notebook), customization options for the style and tone of responses, shared team notebooks with usage analytics, and enhanced privacy and security features. NotebookLM Plus is available through Google Workspace and will be included in Google One AI Premium starting in early 2025.

Customizable Audio Overviews

I love this! You can provide specific instructions to AI hosts, guiding the focus of discussions to particular topics or tailoring content for specific audiences. This customization allows for a more personalized and relevant audio experience.

AInsights

The point of AInsights isn’t just to provide the latest in AI insights for busy executives, it’s also to explore potential applications of each new innovation.

So, let’s explore killer use cases of NotebookLM for executive leaders!

Strategic Planning: Consolidate market research, competitor analyses, and internal reports to inform decision-making. (I have a team of researchers and we’re going to experiment here!)

Meeting Preparation: Quickly synthesize key points from multiple documents to ensure thorough preparation.

Trend Analysis: Identify emerging patterns and trends by analyzing various industry reports and news articles. I’m training my notebooks on the tools and methodologies shared in Mindshift to teach others at scale!

Competitive Intelligence: Process and synthesize vast amounts of market data to generate comprehensive competitive insights.

Executive Summaries: Transform lengthy documents into concise, digestible formats for quick understanding.

Project Management: Create centralized notebooks for specific projects, keeping all relevant documents and discussions in one place.

Regulatory Compliance: Generate comprehensive compliance frameworks by analyzing regulatory requirements and internal policies.

Sales Enablement: Build and maintain dynamic sales playbooks that adapt to market changes and customer needs.

Research and Development: Synthesize technical knowledge from research papers and experimental data, creating living documents that evolve with ongoing projects.

Customer/Client Proposal Development: Streamline the creation of targeted and compelling client proposals by analyzing past successful bids and client requirements.

Please read, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more!

Please subscribe to AInsights, here.

My main list for news, events, and updates, a Quantum of Solis.

The post AInsights: Unlocking Executive-Level Insights with Google’s NotebookLM appeared first on Brian Solis.

Published on January 14, 2025 07:59

January 9, 2025

AInsights: The AI Assistant You Can Call Anytime

AInsights: Your executive-level insights making sense of the latest in generative AI…

OpenAI recently announced the acquisition of the URL chat.com to access ChatGPT directly, saving you an extra three keystrokes! TY! See what I did there? I saved an extra six keystrokes instead of typing out “thank you!”

But wait, there’s more. If you don’t feel like using a browser or a mobile app to collaborate with generative AI, OpenAI has introduced a toll-free phone number for you to call in your prompts!

That’s right, you can now call 1-800-ChatGPT (242-8478), without the need to have an account.

The ChatGPT call-in number is a toll-free hotline that allows users to interact with ChatGPT directly over the phone. This voice-enabled feature provides real-time responses to your inquiries, combining the latest advancements in conversational AI with the simplicity of a traditional phone call.

AInsights

I’ve been thinking through noteworthy applications.

If you believe AI should be about versatility, accessibility, and innovation, then offering multiple ways to connect with ChatGPT—whether through an app, a browser, or now a phone call—makes perfect sense.

1-800-CHATGPT isn’t a replacement; it’s meant to be an enhancement. It brings conversational AI closer to people, meeting them where they are—not just where AI happens to live.

Here are some scenarios where this call-in number could transform your workflows:

On-the-Go Strategy Planning:
Need to run through a decision tree while heading to your next meeting? Call ChatGPT to brainstorm scenarios and key insights in real time.

Team Collaboration & Meeting Prep:
Dial in with your team to summarize reports, refine a pitch, or generate action items for an upcoming project.

Customer & Vendor Insights:
Use the service to practice conversations, prepare for negotiations, or get advice on framing communication strategies.

Executive-Level Q&A:
Ask about trends, draft messaging, or explore new ideas for the business—all through a quick, voice-driven interface.

Instant Ideation:
Dial in for brainstorming sessions while commuting or during team meetings. ChatGPT can help spark creativity, refine ideas, or even draft messaging on the fly.

Quick Problem-Solving:
Call for immediate answers to complex questions, from business strategies to technical troubleshooting, without needing to search or type.

Team Collaboration:
Use it during meetings to summarize points, explore alternative solutions, or generate action plans.

Learning & Development:
Access conversational coaching, role-playing scenarios, or even quick explanations of industry trends and concepts.

Efficiency Gains:
AI now fits seamlessly into busy schedules, making information retrieval and ideation immediate and mobile.

Team Enablement:
Your teams can access powerful AI capabilities without needing technical expertise, enabling real-time collaboration and support.

A Glimpse into an Evolving Future of Work:
This is another step toward an AI-enabled workplace where advanced tools adapt to how—and where—we work.

Please support my new book, Mindshift: Transform Leadership, Drive Innovation, and Reshape the Future. Visit Mindshift.ing to learn more! 🙌

Please subscribe to AInsights, here.

If you’d like to join my master mailing list for news and events, please follow, a Quantum of Solis.

The post AInsights: The AI Assistant You Can Call Anytime appeared first on Brian Solis.

Published on January 09, 2025 17:52