Brian Solis's Blog, page 15

February 28, 2024

AInsights: OpenAI Dazzles with Sora, Sierra AI Makes Customer Service Human, NVIDIA Turns Your PC into an LLM

Created with DALL-E

AInsights: Executive-level insights on the latest in generative AI….

OpenAI’s Sora produces groundbreaking video clips, democratizing and revolutionizing content creation

As you’ve probably heard, or most likely seen, OpenAI introduced its new generative AI video platform, Sora.

Sora can create stunning, realistic high-definition videos from text instructions or a still image as a prompt. It can generate videos in various styles, such as photorealistic, animated, or black and white, up to a minute in length.

Sora is currently limited to a select group of beta testers for evaluation and has not been made generally available.

ElevenLabs also introduced the ability to add AI-generated sound, further bringing the videos to life.


We were blown away by the Sora announcement but felt it needed something…


What if you could describe a sound and generate it with AI? pic.twitter.com/HcUxQ7Wndg


— ElevenLabs (@elevenlabsio) February 18, 2024


On the heels of the Sora announcement, Stability AI previewed Stable Diffusion 3 and opened up a waitlist to test its performance and safety ahead of its official release.


Announcing Stable Diffusion 3, our most capable text-to-image model, utilizing a diffusion transformer architecture for greatly improved performance in multi-subject prompts, image quality, and spelling abilities.


Today, we are opening the waitlist for early preview. This phase… pic.twitter.com/FRn4ofC57s


— Stability AI (@StabilityAI) February 22, 2024


AInsights

Sora is both wowing and frightening experts. The wonder: the potential for video creation in every imaginable application is incredible. The threat: deepfakes are already causing concern and confusion. OpenAI itself is concerned about potential misuse. The company is developing tools to detect Sora-generated content and plans to include metadata in the outputs for identification purposes.

 

If we focus on the positive, because there are negatives and dangers too, Sora will give unprecedented video capabilities to novices and experts alike. Even one-minute clips will introduce stunning, creative, and professional videos into filmmaking, marketing, education, training simulations, gaming and virtual worlds, and entertainment.

https://twitter.com/briansolis/status...

Even though this image for “SORAWOOD” was meant to be playful, AI will rewrite the script, making the future of Hollywood officially in production.

Actor, filmmaker, and studio owner Tyler Perry decided to halt an $800 million studio expansion after witnessing the capabilities of Sora. Perry expressed awe at how AI like Sora could revolutionize content creation, mentioning that he had already used AI in two upcoming projects to avoid lengthy makeup sessions. While recognizing the efficiency AI brings, Perry also voiced concerns about potential job losses in the film industry due to this technology.

If Mr. Perry were to read this, rather than invest $800 million into a studio expansion, that budget could be retargeted to build the studio of the future, one where AI augments filmmaking. It could all start right now.

Sierra AI puts the customer back in service, making chatbots more conversational, and human

I’ve known Bret Taylor from his FriendFeed days as web 2.0 was taking shape. We also worked together at Salesforce until he announced his return to his startup roots. I then left to join ServiceNow as Head of Global Innovation.

In addition to joining OpenAI’s board as chair, Taylor and Clay Bavor just introduced their new startup, Sierra.


I'm excited to announce @ClayBavor and my new company, @SierraPlatform, the conversational AI platform for businesses. With Sierra, every company can elevate their customer experience with AI. https://t.co/qP2r29L62m


— Bret Taylor (@btaylor) February 13, 2024


Sierra is a conversational AI platform for businesses that aims to create a new engagement platform for customers. Like websites and mobile apps, Sierra creates a digital environment that contains the customer experience within a conversational domain powered by AI agents.

“Our thesis is really simple. We think that conversational AI will become the dominant form factor that people use to interact with brands, not just for the sort of current trends like customer service, but really for all aspects of the customer experience,” Taylor told TechCrunch.

What does that look like?

Think of the window as a prompt for action. Whether you’re seeking customer service, changing your mobile plan while you travel overseas, or making changes to existing bookings, it gives you a connected system to find answers and achieve outcomes through conversation. AI agents are trained not only in conversational engagement but also to be empathetic.

Melissa Ziegler, VP of Marketing at OluKai, said this about Sierra: “Observing the AI agent respond empathetically to customers, mirroring the approach of our human agents, was astounding.”

AInsights

Just two years ago, no customer would ever say, “I hope I get to talk to a customer service chatbot today.”

But that’s all changing.

We’re entering a really interesting and accelerated time, witnessing the rapid evolution from dumb chatbots that served as front ends to basic knowledge bases, to hallucinating chatbots, to incredibly connected and capable AI agents.

This is the beginning of a new wave of AI orchestration and agentive AI services.

I once said that generative AI represents the opportunity to help humanize traditional business transactions. Sierra is built on an intelligent conversational foundation intended to deliver an end-to-end customer experience within a dedicated environment.

Agents can be trained on company identity, policies, processes, knowledge, and even culture and persona. But this presents companies with both opportunities and challenges.

If you bring a traditional “service as a cost center” mentality, you’re likely to miss the point. If you bring a “surprise and delight the customer” motivation, you’ll elevate experiences, satisfaction, loyalty, and growth. But that means you have to reimagine the processes, workflows, and language needed to deliver the outcome-based experiences Sierra is capable of facilitating.

NVIDIA turns your Windows PC into its own LLM with its new local AI chatbot, “Chat with RTX”

NVIDIA released Chat with RTX, an AI-powered chatbot that runs locally on a PC running Windows 10 or higher with the latest NVIDIA GPU drivers.

Unlike ChatGPT, Google Gemini, Claude, and other cloud services, Chat with RTX analyzes and summarizes data from the files on your system without sending data to a cloud server. This includes your docs, notes, videos, and other personal data.

https://twitter.com/rowancheung/statu...

Chat with RTX also supports YouTube links, interpreting the content of a video and answering your questions about it. It does this by pulling data from the closed-captions file.

NVIDIA describes it this way: leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, users can query a custom chatbot to quickly get contextually relevant answers. Everything runs locally on Windows RTX PCs or workstations, making results fast and secure.
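Mechanically, that RAG pipeline is simple to picture: retrieve the most relevant snippets from your local files, then feed them to the model as context. Here is a minimal, self-contained sketch of the retrieval step, using naive keyword-overlap scoring in place of the embeddings and TensorRT-LLM acceleration Chat with RTX actually uses; all names and data are illustrative, not NVIDIA's implementation.

```python
# Toy sketch of the local RAG pattern behind tools like Chat with RTX:
# retrieve the most relevant local snippets, then prepend them to the prompt.
# Scoring here is naive keyword overlap; a real system uses vector embeddings.

def score(query: str, chunk: str) -> int:
    """Count query words that appear in the chunk (case-insensitive)."""
    q = set(query.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in q)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble a grounded prompt; a local LLM would answer from this context."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Meeting notes: the Q3 launch slips to October.",
    "Recipe: whisk eggs, add flour, bake at 180C.",
    "Travel doc: flight lands in Austin at 9pm.",
]
prompt = build_prompt("when does the Q3 launch happen", docs)
```

In the real tool, the ranked chunks come from embedding vectors of your files, and the assembled prompt goes to an LLM running locally on the RTX GPU rather than to a cloud API.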

Chat with RTX is available as a free download, and the installer is 35GB.

AInsights

In the previous edition of AInsights, we explored OpenAI’s Sam Altman and his multi-trillion-dollar quest to introduce an AI chip competitor to NVIDIA. Here we see a reversal of roles: Jensen Huang stepping into the software arena and Altman attempting to enter the hardware game.

This is an interesting and promising experiment in localized SLMs (small language models) to personalize your own experiences and make the most out of information you didn’t think to use or information you wish you could use but couldn’t.

I recently read an article because of its headline, “Why It’s So Hard to Search Your Email.” That’s the point. The same is true for local files. It isn’t just about trying to find the right file, it’s about finding, making sense of, and unlocking value from your content.

It reminds me of the new battle for search between Google and companies like Perplexity. It’s the difference between an algorithm and a human algorithm. The latter is based on your data.

We’re at the cusp of personal AI that makes sense of, builds on, and helps you optimize and augment, well, you.

Note that it’s resource intensive, meaning RTX is taxing the system to perform these tasks. So, your mileage may vary. But it is the start of something more profound, personal, and localized.

Please subscribe to AInsights.

Please subscribe to my master newsletter, a Quantum of Solis.

The post AInsights: OpenAI Dazzles with Sora, Sierra AI Makes Customer Service Human, NVIDIA Turns Your PC into an LLM appeared first on Brian Solis.

Published on February 28, 2024 13:09

February 26, 2024

AInsights: The Impact of Generative AI on Business and Operational Transformation and Innovation

Created with Google Gemini

AInsights: Executive-level insights on the latest in generative AI….

I recently reviewed an AI report published by Coatue, a technology investment platform.

In this special edition of AInsights, I wanted to share some of the more notable insights from the research. They demonstrate the significance of these times.

This is the moment to build AI-first and AI augmented organizations, beyond automation and the digitization of legacy models, processes, and mindsets. There will be disruptors. And, there will be the disrupted. Those in the middle will be riding momentum on borrowed time.

Let’s review the rapid rise and impact of generative AI.

We Are Only at Day 1

We are at day 1 of AI, experiencing only the first of its coming waves. Already its trajectory is profound compared to previous waves, including the internet and smartphones.

Coatue, AI: The Coming Revolution

Innovation Cycles are Accelerating

With the rise and shift of each new platform, adoption has been twice as fast in each wave.

Just think about what comes next and then again after that, all of which is coming at us twice as fast and then twice as fast again.

Quick Wins in AI Focus on the Bottom Line While AI Leaders Also Invest in Top Line Growth

Companies investing in AI pilots are already seeing huge efficiency gains, including increases in customer satisfaction. AI will not only have a backend effect, it will introduce growth opportunities.

GPUs Power AI; And We’re Just Getting Started

With the rise of AI comes rising demand for compute. GPUs power AI.

There’s a reason OpenAI’s Sam Altman is seeking to raise trillions for silicon. There’s a reason NVIDIA just posted a stunning 265% growth in revenue YoY. Even last year, NVIDIA was already skyrocketing.

For the 12 months ending 10/31/23, reported revenue was $44.870B, a 57.07% increase YoY.

NVIDIA’s annual revenue for fiscal 2023 (which ended in January 2023) was $26.974B, a 0.22% increase from fiscal 2022.

Keep in mind that ChatGPT launched toward the end of 2022. NVIDIA’s growth since then is just incredible.

GPUs are the right architecture for AI because of their parallel processing capabilities, scalability, energy efficiency, handling of specialized computing tasks, and cost-effectiveness.

GPU Supply Cannot Keep Up With Demand

Demand for GPUs to power AI applications and use cases has far exceeded supply. Aftermarket prices, too, are soaring. Expect more new players beyond Sam Altman. It’s reported that SoftBank’s Masayoshi Son is also exploring investments here.

A New Generation of Companies, Apps, and Devices are Coming

We’re shifting from ~15 years of digital transformation and the quest for organizations to become digital-first or technology-first companies to exploring the meaning of AI in their business transformation.

We’re about to see what AI-first companies look like.

In previous technology waves, we were introduced to new businesses that brought creative destruction to traditional industries or created entirely new products, services, and capabilities as market disruptors.

Think about it this way: the internet, social media, and smartphones each set the stage for new types of companies that the world didn’t know to expect. The same is true for AI.

Who’s the Amazon or Netflix or Google in an AI-first world?

Who’s the Uber or Airbnb in the emergent landscape of AI-first companies?

New Players Will Introduce New Behaviors and Change the Course of Markets

Iteration helps people do things better, faster, more efficiently, and at scale. In this regard, companies that limit AI’s potential to automating existing or legacy processes or models will miss the S-curve of innovation.

Innovation unlocks new value that changes behaviors. This is the basic definition of innovation. It enables people to do things they couldn’t do before. And new behaviors make old processes obsolete. This is the basic definition of disruption.

AI-first companies are not those who prioritize automation, cost-takeout, and workforce replacement. They are those that introduce collaboration opportunities between people and machines to introduce new value, changing behaviors in the process.

Companies that only iterate will find themselves on the wrong side of disruption.

AI Will Redesign Organizational Models

As an analyst, I researched digital transformation and how organizations invested in technology stacks and business models for 10 years. And in that time, I observed more digitization than transformation. In my first report, I explored how digital transformation and the “golden triangle” of the internet, mobile devices, and social media represented an opportunity for leaders to reimagine operational and business models toward innovation. Instead, iteration and digitization prevailed.

I don’t believe companies will have a choice with AI in the long-run. Transformation will not be optional.

AI-first companies will undergo much more than digital transformation; we’ll see complete business and organizational transformation in the quest to become AI-first and AI-optimized. Expect entirely new AI-centric startup models to also reimagine systems and org charts as they change company trajectories.

AI Unlocks New Opportunities for Products and Services (and Market Leadership)

In Florence, Italy, I asked executives to think differently as they approach decisions for operational and business model innovation. I encouraged them to consider a new prompt when evaluating threats and opportunities, “WWAID” or “what would AI do?” The answers will help executives explore the unknown, to consider options and new trajectories that wouldn’t have otherwise existed.

We all don’t know what we don’t know. That’s the challenge and the opportunity.

Companies don’t have to reinvent themselves overnight. There is a maturity curve to becoming AI-first. Start with areas of opportunity. Every company will fund internal and external AI startups.

What’s clear is that AI is the first wave of technological innovation that requires organizations to change at their very core. Successful leaders won’t bolt AI onto legacy processes, because they know they would only optimize pre-2023 ways of doing business.

Winners will look at creating the 2030 model, then the 2035 model, then 2040, etc., today.

Disrupt or die!

Please subscribe to AInsights.

Please subscribe to my master newsletter, a Quantum of Solis.

Published on February 26, 2024 05:48

February 20, 2024

AInsights: Fred Wilson Predicts AI’s Future, The Rise of Chief Innovation Officers, Sam Altman Seeks Trillions for AI Chips

Generated by Google Gemini

AInsights: Executive-level insights on the latest in generative AI….

The AI stack has become foundational to next-gen business transformation

I’ve followed the work of Fred Wilson for the better part of, wow, almost two decades. He’s a legend and influenced much of my work in New York during the rise of Web 2.0 and the mobile web. He recently wrote an article, “What Will Happen in 2024.” In it, he described how AI is driving the “application era of AI.”

“…much like the browser brought us the application era of the web and the iPhone brought us the application era of the mobile device. This is a big deal. While in 2023, everyone was rightly focused on the large language models like OpenAI, Anthropic, Gemini, Llama, etc, we will see new AI-first applications emerge in 2024 that will start to move the focus and the conversation up the stack. And we will see legacy applications embrace AI to make their products better and to remain competitive with the AI-first disrupters.”

AInsights

We could interpret Wilson’s prediction quite literally. With the launch of OpenAI’s GPTs, AI’s first app store is upon us.

If 2023 was an era of AI shock and awe (and chaos and fun!), 2024 will be about those things too, while also putting AI to work, as Dave Wright says. But this is where it gets tricky. We aren’t putting AI to work in the way you might think; it’s actually about the transformation of, well, work itself, and also of us.

Here’s why this should matter. All those years of digital transformation were really aimed less at transformation, and more toward digitization. We weren’t really changing businesses to become digital first, we were investing in digital to modernize legacy, analog processes.

I’d argue, many businesses are doing something similar with AI. The priority is automation and cost takeout. There’s nothing wrong with this. In fact, it’s necessary. But you can’t stop there. You also have to explore, with any new technology, especially with AI, what is possible now? In any given moment, what would AI do (WWAID) to be better, drive growth, or create net new value or opportunities?

The real story, perhaps understated, is that 2024 will be the year of shifting to an AI-first mindset.

#WWAID

The hottest job in corporate America is the Chief AI Officer

We knew that senior-level AI roles would appear as a matter of helping organizations navigate the rapid acceleration of artificial intelligence tools and disruptions. Gartner predicts that by 2025, 35% of large organizations will appoint Chief AI Officers reporting directly to the CEO or COO.

Mayo Clinic in Arizona appointed Dr. Bhavik Patel as its Chief AI Officer. NYU Stern School of Business positioned my friend Conor Grennan as its Head of Generative AI.

According to the New York Times, the Equifax credit bureau, the manufacturer Ashley Furniture, and law firms such as Eversheds Sutherland have appointed A.I. executives over the past year. In December, The New York Times named an editorial director of A.I. initiatives. And more than 400 federal departments and agencies looked for chief A.I. officers last year to comply with an executive order by President Biden that created safeguards for the technology.

Accenture, a consulting firm, added Lan Guan as chief A.I. officer in September as clients became increasingly interested in the technology. Mark Daley, a computer science professor and chief information officer at Western University in Ontario, took the new position of chief A.I. officer in October.

AInsights

Just recently we were debating the need for, or relevance of, Chief Metaverse Officers. AI is different in that it presents net new opportunities for companies to re-evaluate previous digitalization efforts. AI asks more of companies than automation and cost-cutting. AI requires leadership, governance, data integration, and a blueprint for augmentation to align with AI’s exponential trajectory.

At the Baker Hughes annual conference in Florence, I introduced the idea of shifting from being a digital-first ‘technology’ company to an AI-first ‘intelligent’ company.

The importance of hiring senior AI executives lies in their ability to harness and effectively navigate the disruptive potential of AI technology. These executives play a crucial role in dreaming up and coordinating AI initiatives, ensuring that AI strategies are integrated across departments, and aligning AI efforts with the organization’s overall goals. They also help in managing the cultural and organizational changes that come with AI adoption, communicating the benefits and urgency of AI initiatives, and overseeing the ethical and responsible use of AI. Plus, someone is going to have to work with leadership and HR to assess the gaps and the need for evolving skills.

And, if we could just change Chief Artificial Intelligence Officer to Chief Intelligence Amplifier Officer, as Brian Roemmele champions, we would get the acronym CIAO instead of CAIO.

Ciao! 😉

Sam Altman is seeking trillions to reshape the business of silicon chips to power enterprise AI

Sam Altman may believe that NVIDIA’s epic run for chips to power emerging AI applications represents an opportunity for new competition. It’s not uncommon to see articles publicize that NVIDIA employees are getting so wealthy that they’re operating in semi-retirement mode.

Altman’s reported effort to reshape the global semiconductor industry could require raising as much as $5 trillion to $7 trillion, according to the Wall Street Journal. Altman is said to be pitching investors from the UAE, SoftBank CEO Masayoshi Son, and TSMC.

Altman’s move doesn’t come without criticism, however. NVIDIA CEO Jensen Huang threw some shade at Altman’s venture.

At the 2024 World Governments Summit in Dubai, Huang joined UAE’s AI minister, Omar Al Olama, on stage to explore the future of AI.

In reference to Altman’s quest for trillions, Al Olama asked Huang, “How many GPUs can we buy for US$7 trillion?”

Huang sarcastically quipped, “Apparently, all the GPUs.”

Altman though isn’t content to let his critics get the last word.

In a post on X, he clapped back, “You can grind to help secure our collective future, or you can write Substacks about why we are going [to] fail.”


you can grind to help secure our collective future or you can write substacks about why we are going fail


— Sam Altman (@sama) February 11, 2024


AInsights

Altman’s efforts are motivated by a drive to support the growth of AI technologies and overcome the limitations imposed by the current scarcity of high-performance AI chips. So what does that mean? Intel and AMD are also competing in the GPU space. I have to imagine that Altman is also trying to solve for OpenAI’s growth and performance constraints, much in the same way Apple ended up developing its own chips for its hardware devices. Only a handful of tech companies need the majority of AI chips. It’s reported that Microsoft and Amazon are building their own chips as well. The U.S. government is also in on the action.

Meta is also investing in silicon, introducing its Artemis AI chips. Depending on who you follow, Artemis is either meant to break away from NVIDIA or to complement it. Reading between the lines, Meta is at least trying to reduce its dependency on NVIDIA chips.

What’s clear is that AI is driving demand for greater compute. And every company is going to need to solve for this internally or through outsourcing. A new chip war is upon us.

This isn’t as much of a race to generative AI as it is, most likely, a race to general AI.

Please subscribe to my newsletter, a Quantum of Solis.

Published on February 20, 2024 05:06

February 15, 2024

IMMR Names Brian Solis in Top 20 List of Must Follow Thought Leaders in Generative AI

IMMR’s Dr. Phil Hendrix announced a short list of “20 Generative AI Must Follow” thought leaders. Brian Solis was named to the list.

👉 Big Picture/Strategy – Brian Solis (ServiceNow).

“The adage ‘Standing on the shoulders…’ definitely applies here. I’m indebted to you and other thought leaders who are guiding and helping the rest of us harness this transformative technology,” – Dr. Phil Hendrix.

Congratulations to

Allie Miller who should be on the list!

Anthony Alcaraz

Armand Ruiz

Charlene Li!

Conor Grennan!

Ethan Mollick

Gianni Giacomelli

John Sviokla

Kevin Petrie

Lisa Palmer

Marshall Kirkpatrick!

Paul Baier

Paul Roetzer

Phil Fersht

Reuven Cohen

Rodney Zimmel

Sahar Mor

Sanjeev Mohan

Scott Belsky

Ted Shelton!

Published on February 15, 2024 22:04

February 13, 2024

AInsights: MultiOn and Large Action Models (LAMs), Introducing Google Gemini and Its Version of Copilot, AI-Powered Frames, Disney’s HoloTile for VR

A whole new AI world: Created with Google Gemini

AInsights: Executive-level insights on the latest in generative AI….

MultiOn AI represents a shift from generative AI that passively responds to queries to actively participating in accomplishing tasks

The idea is this: tools such as ChatGPT are trained on large language models (LLMs). A new class of solution is emerging that focuses on streamlining information, processes, and digital experiences by integrating large action models (LAMs).

MultiOn AI is a new class of tool that makes generative AI actionable. It leverages generative AI to autonomously execute digital processes and experiences, operating in the background of any digital platform and handling tasks that don’t require user attention. Its aim is to reduce hands-on, step-by-step work, helping users focus on the activities and interactions where their time and attention are more valuable.


When AI becomes a LAM or Large Action Model.


Watch this demo by @sundeep presenting @MultiOn_AI to @Jason, where genAI shifts from a passive response system to executive queries that autonomously execute and accomplish tasks. pic.twitter.com/kdIATmhFx0


— Brian Solis (@briansolis) February 10, 2024


MultiOn is a software example of what Rabbit’s R1 is also executing through a handheld AI-powered device.

AInsights

Beyond the automation of repetitive tasks, LAMs such as MultiOn AI can interact with various platforms and services to execute disparate tasks across them. This opens the door to all kinds of cross-platform applications that will only mature and accelerate exponentially.

For example:

Ordering Food and Making Reservations: Users can instruct MultiOn AI to find restaurants and make reservations.

Organizing Meetings: MultiOn AI can send out meeting invitations automatically.

Entertainment Without Interruptions: MultiOn AI can play videos and music from any platform, skipping over the ads for an uninterrupted experience.

Online Interactions: MultiOn AI can post and interact with others online.

Web Automation and Navigation: MultiOn AI can interact with the web to perform tasks like finding information online, filling out forms, booking flights and accommodations, and populating your online calendar.
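The examples above share one shape: map an instruction to an intent, then dispatch to a handler that performs the action against an external service. Here is a hypothetical sketch of that LAM-style loop; the intents, handlers, and keyword matching are invented for illustration (MultiOn’s actual implementation is not public, and a real system would infer intent with an LLM rather than keywords).

```python
# Hypothetical sketch of a LAM-style dispatch loop: map a user instruction
# to an intent, then run the handler that performs the real-world action.
# In a real system the intent would come from an LLM, not keyword matching.

def book_table(instruction: str) -> str:
    return "reservation requested"   # would call a booking API here

def schedule_meeting(instruction: str) -> str:
    return "invites sent"            # would call a calendar API here

HANDLERS = {
    "reserve": book_table,
    "reservation": book_table,
    "meeting": schedule_meeting,
}

def act(instruction: str) -> str:
    """Route the instruction to the first matching handler."""
    for keyword, handler in HANDLERS.items():
        if keyword in instruction.lower():
            return handler(instruction)
    return "no action found"

result = act("Please reserve a table for two at 7pm")
```

The interesting design question for LAMs is exactly what this sketch glosses over: how reliably the model can translate free-form instructions into safe, correct actions against live services.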

Google Bard rebranded as Gemini to officially (finally) take on ChatGPT and other generative AI platforms

Google’s Bard was a code red response to OpenAI’s popular ChatGPT platform. Bard is now Gemini and is officially open for business. While it’s not quite up to ChatGPT or Claude levels, it will compete. It has to.

Google also introduced Gemini Advanced to compete against OpenAI’s pro-level ChatGPT service. It is also going against Microsoft and its Copilot initiatives.

The new app is designed to do an array of tasks, including serving as a personal tutor, helping computer programmers with coding tasks and even preparing job hunters for interviews, Google said.

“It can help you role-play in a variety of scenarios,” said Sissie Hsiao, a Google vice president in charge of the company’s Google Assistant unit, during a briefing with reporters.

Gemini is a “multimodal” system, meaning it can respond to both images and sounds. After analyzing a math problem that included graphs, shapes and other images, it could answer the question much the way a high school student would, according to the New York Times.

After the awkward stage of Google Bard being Bard (let’s not forget a “bard” is, by definition, a professional storyteller), Google recognizes it has to compete for day-to-day activities, beyond novelty.

There are now two flavors of Google AI: 1) Gemini, powered by Pro 1.0, and 2) Gemini Advanced, powered by Ultra 1.0. The latter costs $19.99 per month via a subscription to Google One.

Similar to ChatGPT-4, Gemini is multimodal, which means you can input more than text.

I hear this question a lot, usually in private. So, I’ll just put it here and you can skip over it if you don’t need it. Multimodal refers to the genAI model’s ability to understand, process, and generate content across multiple types of media or ‘modalities’, including text, code, audio, images, and video. This capability allows Gemini or other models to perform tasks that involve more than just text-based inputs and outputs, making it significantly more versatile than traditional large language models (LLMs) that primarily focus on text. For example, Gemini can analyze a picture and generate a text-based description, or it can take a text prompt and produce relevant audio or visual content.
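Mechanically, a multimodal prompt is just an ordered sequence of typed parts rather than a single string. Here is a toy illustration of that structure; the part types, field names, and serialization are invented for illustration and are not Gemini’s actual API.

```python
# Toy illustration of a multimodal prompt: an ordered list of typed parts.
# Real APIs use a similar shape, with images sent as encoded bytes or file
# references; the types and field names here are invented for illustration.
import json

def text_part(s: str) -> dict:
    return {"type": "text", "text": s}

def image_part(uri: str) -> dict:
    return {"type": "image", "uri": uri}

# Text and image parts interleave in one prompt, in order.
prompt = [
    text_part("What is the area of the shaded triangle?"),
    image_part("file:///homework/triangle.png"),
    text_part("Explain each step."),
]

payload = json.dumps({"contents": prompt})
```

The key idea is the ordered, typed-parts shape: the model sees the image in context between the two text instructions, which is what lets it answer the math question “much the way a high school student would.”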

Gemini can visit links on the web, and it can also generate images using Google’s Imagen 2 model (a feature first introduced in February 2024). And like ChatGPT-4, Gemini keeps track of your conversation history so you can revisit previous conversations, as observed by Ars Technica.

Gemini Advanced is ideal for more ‘advanced’ capabilities such as coding, logical reasoning, and collaborating on creative projects. It also allows for longer, more detailed conversations and better understands the context from previous prompts. Gemini Advanced is more powerful and suitable for businesses, developers, and researchers.

Gemini Pro is available in over 40 languages and provides text generation, translation, question answering, and code generation capabilities. Gemini Pro is designed for general users and businesses.

AInsights

Similar to what we’re seeing with Microsoft Copilot, Google Gemini Advanced will be integrated into all Google Workspace and Cloud services through its Google One AI premium plan. This will boost real-time productivity and up-level output for those that learn how to get the most out of each application via multi-modal prompting.

AI-powered AR glasses and other devices are on the horizon

2024 will be the year of AI-powered consumer devices. Humane debuted its AI Pin, which will start shipping in March. The Rabbit R1 is also set to start shipping in March and is already back-ordered several months.

Singapore-based Brilliant Labs just entered the fray with its new Frame AI-powered AR glasses (Designed with ❤ in Stockholm and San Francisco) powered by a multimodal AI assistant named Noa.

Not to be confused with Apple’s Vision Pro or Meta’s AR product line, Frame is meant to be worn frequently in the way that you might use Humane’s AI Pin. Oh, and before you ask, the battery is reported to last all day.

Priced at $349, Frame puts AI in front of your eyes using an open-sourced AR lens. It uses voice commands and is also capable of visual processing.

Noa can also generate images and provide real-time translation. Frame features integrations with the AI answer engine Perplexity, Stability AI’s text-to-image model Stable Diffusion, OpenAI’s GPT-4 text-generation model, and the speech recognition system Whisper.

“The future of human/AI interaction will come to life in innovative wearables and new devices, and I’m so excited to be bringing Perplexity’s real-time answer engine to Brilliant Labs’ Frame,” Aravind Srinivas, CEO and founder of Perplexity, said in a statement.

Imagine looking at a glass of wine and asking Noa the number of calories in the glass (I wouldn’t want to know!) or the story behind the wine. Or let’s say you see a jacket that someone else is wearing, and you’d like to know more about it. You could also prompt Frame to help you find the best pricing or summarize reviews as you shop in-store.

Results are presented onto the lenses.

AInsights

The company secured funding from John Hanke, CEO of Niantic, the AR platform behind Pokémon GO. This tells me that it’s at least going to be around for a couple of iterations, which is good news. At $349, and even though I may look like Steve Jobs’ twin, I’ll probably give Frame a shot.

I still haven’t purchased Rabbit’s R1 simply because it’s too far out for me to have any meaningful reaction to it or to help you understand its potential in your life. At $699 (starting) plus a monthly service fee, I just can’t justify an investment in Humane’s AI Pin, though I’d love to experiment with it. To me, Humane is pursuing a post-screen or post-smartphone world, and I find that fascinating!

Disney introduces HoloTile concept to help you move through virtual reality safely

No one wants to end up like this…

https://twitter.com/briansolis/status...

Disney Research Imagineered one potential, incredibly innovative solution.

Designed by Imagineer Lanny Smoot, the HoloTile is the world’s first multi-person, omnidirectional, modular treadmill floor for augmented and virtual reality applications.


I'm a huge fan of @Disney #Imagineering. ✨


Designed by Imagineer Lanny Smoot, the HoloTile is the world's first multi-person, omnidirectional, modular treadmill floor for augmented and virtual reality applications.


When AR and VR started to spark conversations about a… pic.twitter.com/h4xQ7JbFdK


— Brian Solis (@briansolis) February 5, 2024


The Disney HoloTile isn’t premised on a traditional treadmill; instead, it’s designed for today’s and tomorrow’s AR, VR, and spatial computing applications. I can only imagine the applications that we’ll see on Apple’s Vision Pro and others in the near future.

AInsights

When AR and VR started to spark conversations about a metaverse, new omnidirectional treadmills emerged that were promising, but in more traditional ways.

It reminded me of the original smartphones. They were based on phones. When the iPhone was in development, the idea of a phone was completely reimagined as a device that “combines three products — a revolutionary mobile phone, a widescreen iPod with touch controls, and a breakthrough Internet communications device with desktop-class email, web browsing, searching and maps — into one small and lightweight handheld device.” In fact, after launch, the number of phone calls made on iPhones remained relatively flat while data usage only continued to spike year after year.

Please subscribe to my newsletter, a Quantum of Solis.

The post AInsights: MultiOn and Large Action Models (LAMs), Introducing Google Gemini and Its Version of Copilot, AI-Powered Frames, Disney’s HoloTile for VR appeared first on Brian Solis.

Published on February 13, 2024 05:55

February 12, 2024

AInsights: Fighting Against Deepfakes, GenAI Gets into College, AI CarePods Brings Healthcare to You

A bad actor uses AI to create deepfakes to impersonate a user for deceptive purposes

AInsights: Executive-level insights on the latest in generative AI….

Meta leading effort to label AI-generated content

Nick Clegg, president of global affairs and communications at Meta, is pushing other companies to help identify AI-created content. (Credit: Paul Morris/Bloomberg)

One of the greatest threats, building on already-profitable disinformation and misinformation campaigns, is the rise of deepfakes.

In recent news, a finance worker was tricked into paying out $25 million after a video call in which his company’s CFO and coworkers turned out to be fake. The participants were digitally recreated using publicly available footage of each individual.

Explicit deepfake images of Taylor Swift widely spread across social media in January. One image shared on X was viewed 47 million times.

AI’s use in election campaigns, especially those by opposing forces, is already deceiving voters and threatens to wreak havoc on democracies everywhere.

Policy and counter-technology must keep up.

On February 8th, the Federal Communications Commission outlawed robocalls that use voices generated by conversational artificial intelligence.

It’s easy to see why. Just take a look at the capability of tools such as Bland.ai that are meant to help businesses introduce AI-powered conversational engagement to humanize mundane processes. With a little training, AI tools such as HeyGen could easily deceive people, and when put into the wrong hands, the consequences will be dire.

At the World Economic Forum in Davos, Nick Clegg, president of global affairs at Meta, said the company would lead technological standards to recognize AI markers in photos, videos and audio. Meta hopes that this will serve as a rallying cry for companies to adopt standards for detecting and signaling that content is fake.

AInsights

The collaboration among industry partners and the development of common standards for identifying AI-generated content demonstrate a collective, meaningful effort to address the challenges posed by the increasing use of AI in creating misleading and potentially harmful content.

Standards will only help social media companies identify and label AI-generated content, which will aid in the fight against misinformation and disinformation and protect the identities and reputations of people against deepfakes.

The introduction of AI markers and labeling standards is a significant step towards enhancing transparency, combating misinformation, and empowering users to make more informed choices about the content they encounter on digital platforms.
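To make the idea of an "AI marker" concrete, here is a toy sketch of a content provenance tag: a label paired with a hash of the content so a platform can later verify that the tag belongs to that exact file. The real standards Meta references (such as C2PA) are far richer and cryptographically signed; everything below is an illustrative assumption, not any company's actual scheme.

```python
import hashlib
import json

def make_marker(content: bytes, generator: str) -> str:
    """Build a toy provenance marker: content hash plus a generator label."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g., which AI model produced the content
        "ai_generated": True,
    })

def verify(content: bytes, marker: str) -> bool:
    """Check that the marker actually belongs to this content."""
    return json.loads(marker)["sha256"] == hashlib.sha256(content).hexdigest()

image_bytes = b"...rendered pixels..."
marker = make_marker(image_bytes, "example-image-model")
print(verify(image_bytes, marker))          # True: marker matches content
print(verify(b"tampered pixels", marker))   # False: content was altered
```

The hash binding is what makes the label useful: a marker that travels with content but doesn't commit to its bytes could be copied onto anything.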

New services offer “human vibes” to people who use generative AI to do their work instead of using genAI to augment their potential

A student uses genAI to write their essay for university

This interesting Forbes article asks, “did you use ChatGPT on your school applications?”

It turns out that using generative AI to do, ironically, the personal work of conveying why someone may be the best fit for a higher education institution is overwhelming admissions systems everywhere.

To help, schools are increasingly turning to software to detect AI-generated writing. But accuracy is an issue, leaving admissions offices, professors and teachers, editors, managers, and reviewers everywhere wary of acting on potential AI detections.

“It’s really an issue of, we don’t want to say you cheated when you didn’t cheat,” Emily Isaacs, director of Montclair State University’s Office for Faculty Excellence, shared with Inside Higher Ed.

Admissions committees are doing their best to spot patterns that may serve as telltale signs that AI, and not human creativity, was used to write the application. They’re paying attention to colorful words, flowery phrases, and stale syntax, according to Forbes.

For example, these experts report that the following words have spiked in usage in the last year: “Tapestry,” “Beacon,” “Comprehensive curriculum,” “Esteemed faculty,” and “Vibrant academic community.”

To counter detection, in a way that almost seems counterintuitive, students are turning to a new type of editor to “humanize” AI output and help eliminate detectability.

“Tapestry” in particular is a major red flag in this year’s pool, several essay consultants on the platform Fiverr told Forbes.
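The frequency heuristic these reviewers describe can be sketched in a few lines. The phrase list comes from the article; the scoring function itself is an invented toy, not any school's actual detector:

```python
import re

# Phrases reviewers told Forbes have spiked in AI-written essays.
FLAGGED = ["tapestry", "beacon", "comprehensive curriculum",
           "esteemed faculty", "vibrant academic community"]

def flag_score(essay: str) -> dict:
    """Count case-insensitive occurrences of each flagged phrase."""
    text = essay.lower()
    return {p: len(re.findall(re.escape(p), text)) for p in FLAGGED}

essay = ("Joining your vibrant academic community would weave a new thread "
         "into the tapestry of my education, guided by esteemed faculty.")
print(flag_score(essay))  # each flagged phrase with its count
```

A heuristic this crude also shows why accuracy is such a problem: a human applicant who genuinely likes the word "tapestry" scores exactly like a chatbot.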

This is all very interesting in that admissions offices are also deploying AI to automate the application review process and boost productivity among the workforce.

Sixty percent of admissions professionals said they currently use AI to review personal essays as well. Fifty percent also employ some form of AI chat bot to conduct preliminary interviews with applicants.

AInsights

I originally wasn’t going to dive into this one, but then I realized, this isn’t just about students. It’s already affecting workforce output and will only do so at speed and scale. I already see genAI overused in thought leadership among some of my peers. Amazon too is getting flooded with new books written by AI.

The equivalent of the word “tapestry” is recognizable everywhere, especially when you compare the output to previous works. But, as with admissions committees, there is no clear solution.

And it makes me wonder, do we really need a platform that calls people out for using or over-using AI in their work? What is the spectrum of acceptable usage? What we do need is AI literacy for everyone (students, educators, policy makers, managers) to ensure that the human element (learning, expertise, potential) is front and center and nurtured as genAI becomes more and more pervasive.

AI doctors in a box are coming directly to people to make healthcare more convenient and approachable

Customers can use Forward’s CarePod for $99 a month. (Credit: Forward)

Adrian Aoun is the cofounder of San Francisco-based health tech startup Forward, a primary care membership with gorgeous “doctor’s” offices that make your healthcare proactive with 24/7 access, biometric monitoring, genetic testing, and a personalized plan for care.

Now Aoun announced $100 million in funding to introduce new 8×8 foot “CarePods” that deliver healthcare in a box in convenient locations such as malls and office parks.

The CarePod is designed to perform various medical tasks, such as body scans, measuring blood pressure, blood work, and conducting check-ups, without the need for a human healthcare worker on site. Instead, CarePods send the data to Forward’s doctors for real-time or follow-up consultations.

AInsights

AI-powered CarePods will make medical visits faster, more cost-effective, and I bet more approachable. There are skeptics though, and I get it.

Arthur Caplan, a professor of bioethics at New York University, told Forbes, “The solution then isn’t to go to jukebox medicine.” The use of the word “jukebox” is an indicator. It tells me he believes things should work based on existing frameworks.

“Very few people are going to show up at primary care and say, ‘My sex life is crummy, I’m drinking too much, and my marriage is falling apart,’” Caplan explained.

But my research over the years communicates the opposite, especially among Generation-Connected. It is easier for men, for example, to speak more openly about emotional challenges to AI-powered robots. I’m not saying it’s better. I’ve observed time and time again, that the rapid adoption of technology in our personal lives is turning us into digital narcissists and digital introverts. Digital-first consumers want things faster, more personalized, more convenient, more experiential. They take to technology first.

 “AI is an amazing tool, and I think that it could seriously help a lot of people by removing the barriers of availability, cost, and pride from therapy,” Dan, a 37-year-old EMT from New Jersey, told Motherboard.

CarePods aim to remove the impersonal, sanitized, beige, complex, expensive, clipboard-led healthcare experiences that many doctors’ offices provide today. If technology like this makes people take action toward improving their health, then let’s find ways to validate and empower it. We’ll most likely find that doing so will make healthcare proactive versus reactive.

Please subscribe to my newsletter, a Quantum of Solis.

The post AInsights: Fighting Against Deepfakes, GenAI Gets into College, AI CarePods Brings Healthcare to You appeared first on Brian Solis.

Published on February 12, 2024 06:11

February 11, 2024

Introducing AInsights: Executive-level insights on the latest in generative AI

I started writing more and more about generative AI so that I could keep up with important trends and what they mean in my world.

I gave it a name with a simple tagline:

“AInsights: Executive-level insights on the latest in generative AI.”

Here’s the current catalog if you’d like to dive in with me…

The New Google Bard Aka Gemini, Neuralink Trials, Bland Conversational AI Agents, Deep Fakes, And GPT Assistants – Link

AI Is Coming For Jobs, Robots Are Here To Help, AGI Is On The Horizon, Google Has A Formidable Competitor – Link

Anthropic Funding, Midjourney Antifunding, Rabbit R1 Sellout Debut, And Microsoft Copilot’s Takeoff – Link

GPTs, AI Influencers, Digital Twins, Microsoft, AI Search – Link

Prompt Engineering: Six Strategies For Getting Better Results – Link

The Four Waves Of Generative AI, We’re In Wave 2 According To Mustafa Suleyman – Link

A Generative AI Video Timeline of the Top Players – Link

A Guided Tour Of Apple’s Vision Pro And A Glimpse Of The Future Of [Spatial] Computing – Link

Google Cuts, Expected Industry Cuts, GenAI’s $1 Trillion Trajectory – Link

NY Times Sues OpenAI and Next-Gen AI Devices – Link

Introducing the GenAI Prism with JESS3 and Conor Grennan – Link

OpenAI and the Events that Caused the Crazy Four Days Between Sam Altman’s Firing and Return – Link

Please subscribe to my newsletter, a Quantum of Solis.

The post Introducing AInsights: Executive-level insights on the latest in generative AI appeared first on Brian Solis.

Published on February 11, 2024 11:57

February 5, 2024

AInsights: The New Google Bard aka Gemini, Neuralink Trials, Bland Conversational AI Agents, Deep Fakes, and GPT Assistants

Created with Google Bard aka Gemini

AInsights: Generative insights in AI. Executive-level insights on the latest in generative AI….

Bard gets another massive update, attempting to pull users from competitive genAI sites such as ChatGPT and Midjourney; Google is reportedly rebranding Bard as Gemini

Google Bard released a significant round of new updates, including:

Integration with Google Services: Bard now draws information from YouTube, Google Maps, Flights, and Hotels, allowing users to ask a wider variety of questions and perform tasks like planning trips and obtaining templates for specific purposes.

Multilingual Support: The chatbot can now communicate in over 40 languages.

Image Generation: Google has added image generation capability, allowing it to create images powered by the creativity and imagination of your prompts. The lead image was developed using Bard.

Email Summarization and Sharing: Bard now offers better and more thorough summaries of recent emails and allows for the sharing of conversations, including images, to better appreciate the creative process.

Fact-Checking and Integration with Google Apps: You can now fact-check in Bard, which is a big plus in its competition with Google search rival Perplexity.

Users can also integrate with Google Workspace and import data from other Google apps, such as summarizing PDFs from Google Docs. This appears to be a step toward competing against Microsoft’s Copilot, though there is much ground to make up.

Google Bard Insights

At this point, active genAI users are already onboard, so Google will have to work to entice new users. This is perhaps why Google is said to be in the process of rebranding Bard as Gemini and is set to launch a dedicated app to compete against ChatGPT and other focused genAI applications.

Incremental releases won’t accomplish that on their own. But I am impressed to see Google moving to wave 3 of genAI in becoming a system of action.

Neuralink implants chip in first human brain.

Neuralink, a brain-chip startup founded in 2016 by Elon Musk, received U.S. FDA approval for human trials. The company successfully conducted its first human implant, and the patient is recovering well.

Neuralink is developing a brain-computer interface (BCI) that implants a device (N1) about the size of a coin in the skull, with ultra-thin wires going into the brain. The N1 implant allows individuals to wirelessly control devices using their brain to potentially restore independence and improve the lives of people with paralysis.

Neuralink AInsights

The success of this and all subsequent trials will serve as the foundation for potential medical applications that could revolutionize the lives of people with severe paralysis. Neuralink’s BCI could give patients control of computers, robotic arms, wheelchairs, and other life changing devices through their thoughts.

It’s also important to remember that one of Neuralink’s initial ambitions is to meld brains with AI for human-machine collaboration.

Bland.ai introduces a new genre of chatbot that literally talks to you, like a human

I remember, before the launch of ChatGPT, when Google was accused in 2018 of faking a big AI demo in which its virtual assistant called a hair salon and made a reservation on a user’s behalf.

Head on over to Bland.ai to receive a call from Blandy and be introduced to a new genre of conversational AI.

Bland.ai introduced an AI call center solution that’s reportedly capable of managing 500,000 simultaneous calls, operating in anyone’s voice.

Bland.ai AInsights

Initial use cases point to lead qualification calls. When connected to a CRM, the AI can automate initial lead calls to save time, reduce human error, and surface important insights into prospect preferences and behaviors.

There are pros and cons of technology like this. It’s inevitable. This technology, IMHO, doesn’t replace SDRs, it sets them up for human-centered success and increases their value.

AI-generated content, including calls and customer interactions, may lack the human touch, creativity, and ability to think outside the box, leading to repetitive, monotonous, and bland (get it?) content. Human touch counts for everything.

But, be warned: this technology helps even as it empowers the dark side of AI and humanity. Conversational AI can lower our defense mechanisms by making everyday transactional engagement with AI acceptable. Deepfakes will pounce on this.

Recently, a finance worker at a multinational firm was coaxed into paying out $25 million to deepfake fraudsters posing as the company’s chief financial officer in a video conference call.

The growing libraries of specifically focused GPTs can now be summoned in real-time in ChatGPT sessions

Sometimes you don’t need to do the heavy lifting of prompt engineering in uncharted territory. When ChatGPT introduced the GPT Store, the intent was to connect users to useful, custom versions of ChatGPT for specific use cases.

Now, you can summon GPTs directly within a conversation (by typing “@” and selecting the GPT) to accelerate desired outcomes in specific applications.

GPT AInsights

We don’t know what we don’t know. And that’s important when we don’t know what to ask or what’s possible. Sometimes it can take a series of elaborate, creative, meandering prompts to help us achieve the outcomes needed for each step we’re trying to accomplish in any new process.

AI founder and podcast host Jaeden Schafer shared an example use case where stacking GPTs in the same thread could empower founders. “One makes a logo, Canva makes a pitch deck, another writes sales copy, etc.,” he shared.
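The stacking workflow Schafer describes is essentially a pipeline where each stage plays the role of one specialized assistant. A toy sketch, in which every function is a hypothetical stand-in for a custom GPT rather than a real GPT Store integration:

```python
# Each function stands in for one specialized GPT in the same thread.
def logo_gpt(brief: str) -> str:
    return f"[logo concept for: {brief}]"

def deck_gpt(brief: str, logo: str) -> str:
    return f"[pitch deck using {logo} for: {brief}]"

def copy_gpt(brief: str) -> str:
    return f"[sales copy for: {brief}]"

def founder_pipeline(brief: str) -> dict:
    """Chain the assistants so later steps can build on earlier outputs."""
    logo = logo_gpt(brief)
    return {"logo": logo,
            "deck": deck_gpt(brief, logo),   # the deck reuses the logo
            "copy": copy_gpt(brief)}

print(founder_pipeline("AI-powered plant care"))
```

The interesting design point is the chaining: because every GPT shares the same thread, each one can consume what the previous one produced instead of starting from a blank prompt.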

The post AInsights: The New Google Bard aka Gemini, Neuralink Trials, Bland Conversational AI Agents, Deep Fakes, and GPT Assistants appeared first on Brian Solis.

Published on February 05, 2024 07:10

January 25, 2024

Brian Solis to Keynote Baker Hughes Annual Meeting, Energizing Change, in Florence, Italy

Brian Solis was announced as the closing keynote speaker at the Baker Hughes annual meeting in Florence, Italy. At Energizing Change, Brian will explore a post digital transformation world to build the foundation for AI-First models and mindsets.  Brian believes that in every strategic decision we face, we need to start by asking this differentiating question…

#WWAID (what would AI do?)

The post Brian Solis to Keynote Baker Hughes Annual Meeting, Energizing Change, in Florence, Italy appeared first on Brian Solis.

Published on January 25, 2024 10:28

January 22, 2024

AInsights: AI is Coming for Jobs, Robots are Here to Help, AGI is on the Horizon, Google has a Formidable Competitor

Created with DALL-E

AInsights: Generative insights in AI. This rapid-fire series offers executive-level AInsights into the rapidly shifting landscape of generative AI.

OpenAI CEO Sam Altman says artificial general intelligence is on the horizon but will change the world much less than we fear.

Rumor is that Sam Altman was fired as CEO by the board over concerns in how Altman was aggressively pursuing artificial general intelligence (AGI). Now Altman is on record saying that our concerns over AGI may be overblown.

“People are begging to be disappointed and they will be,” said Altman.

AGI is next-level artificial intelligence that can complete tasks on par with or better than humans.

Altman believes that AGI will become an “incredible tool for productivity.”

“It will change the world much less than we all think and it will change jobs much less than we all think,” Altman assured.

AInsights

Make no mistake. The stakes will only get higher. They already are with the arrival of generative AI.

In a conversation with Satya Nadella, Altman reminded us that there is no “magic red button” to stop AI.

But at the same time, we as humans are already evolving by using AI and we will continue to do so.

“The world had a two week freakout over ChatGPT 4. ‘This changes everything. AGI is coming tomorrow. There are no jobs by the end of the year.’ And now people are like, ‘why is it so slow!?’,” Altman joked with the Economist.

While he is aiming to downplay the role of AGI, generative AI is already showing us what we can expect. We need a new era of AI-first leadership. We need decision-makers who can not only automate work but also imagine, inspire, and empower a renaissance, one where workers and jobs are augmented with AI to unlock new potential and performance.

DeepMind co-founder Mustafa Suleyman warns AI is a ‘fundamentally labor replacing’ tool over the long term.

At a session at the World Economic Forum (WEF) in Davos, Suleyman said the quiet part out loud: AI is coming for jobs. I feel like this is more than just a bullet in this AInsights update. This is a canary in the coal mine, if that’s still a relevant reference point.

In an interview with CNBC, Suleyman was asked if AI was going to replace humans in the workplace.

This was his answer, “I think in the long term—over many decades—we have to think very hard about how we integrate these tools because, left completely to the market…these are fundamentally labor replacing tools.”

And he’s right. Let’s get that part out of the way.

Left to their own devices, executives naturally gravitate toward what they know: automation and cost-cutting to increase margins and profitability. You have to center insights on what drives their KPIs.

AInsights

Suleyman is not saying this from a doomsday perspective. He’s asking us to think differently about how we use AI to augment our work, and more importantly, our thinking…

In reality, executives have never, we’ve never, had access to technology that can collaborate and create. We’ve always been the source of creation. Executive decision-making is what drives businesses. At most, digital transformation and predictive analytics have aided in decision-making. But now, AI can augment decision-making, perform work, make us smarter and more capable, and unlock new possibilities.

That takes a new vision and a new set of KPIs to attach to executive decision-making.

For example, IKEA launched an AI bot named Billie to lead first-level contact with customers, effectively managing 47% of customer queries directed to its call centers. In a time when everyone is talking about AI replacing jobs, IKEA trained 8,500 call center workers to serve as interior design advisers, generating 1.3 billion euros in one year.

Stanford AI and Robotics debut developmental robot that makes the Jetsons a reality.

Zipeng Fu, a robotics PhD student at the Stanford AI Lab, shared his team’s incredible project, Mobile ALOHA. The team includes Tony Zhao and Chelsea Finn.

If you ever watched The Jetsons, then you, like me, wished that we’d see Rosey come to life one day. And from an R&D perspective, that’s exactly what’s happening in Silicon Valley.

From the project abstract, on how they’re training Mobile ALOHA:

Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection.

To date, training on just 50 demonstrations enables the robot to autonomously accomplish complex mobile manipulation tasks.

Tasks include laundry, self-charging, vacuuming, watering plants, loading and unloading a dishwasher, making coffee, cooking, getting drinks from a refrigerator, opening a beer, opening doors, playing with pets, and so much more.
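The "imitation learning from human demonstrations" the team describes boils down to behavior cloning: fit a policy that maps observations to the actions a human demonstrator took. Mobile ALOHA's real policy is a learned neural network over camera images and joint states; the linear toy below only illustrates the core idea, with all numbers invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# An unknown "expert" mapping from observations to actions (the human
# teleoperator's behavior, which the robot never sees directly).
true_policy = np.array([[0.5, -1.0], [2.0, 0.3]])

obs = rng.normal(size=(200, 2))       # observations seen during demos
actions = obs @ true_policy.T         # actions the demonstrator took

# "Training" = least-squares fit over the recorded (obs, action) pairs.
learned, *_ = np.linalg.lstsq(obs, actions, rcond=None)
learned = learned.T

new_obs = np.array([1.0, 2.0])
print(learned @ new_obs)              # cloned policy on an unseen observation
```

With clean, fully observed demonstrations the fit recovers the expert exactly; the hard part in real robotics is that observations are images, actions are high-dimensional, and 50 demos is a remarkably small dataset for that.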


Mobile ALOHA's hardware is very capable. We brought it home yesterday and tried more tasks! It can:
– do laundry👔👖
– self-charge⚡
– use a vacuum
– water plants🌳
– load and unload a dishwasher
– use a coffee machine☕
– obtain drinks from the fridge and open a beer🍺
– open… pic.twitter.com/XUGz7NhpeA


— Zipeng Fu (@zipengfu) January 4, 2024



Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 — Hardware!
A low-cost, open-source, mobile manipulator.


One of the most high-effort projects in my past 5yrs! Not possible without co-lead @zipengfu and @chelseabfinn.


At the end, what's better than cooking yourself a meal with the 🤖🧑‍🍳 pic.twitter.com/iNBIY1tkcB


— Tony Z. Zhao (@tonyzzhao) January 3, 2024



Introduce 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 — Learning!


With 50 demos, our robot can autonomously complete complex mobile manipulation tasks:
– cook and serve shrimp🦐
– call and take elevator🛗
– store a 3Ibs pot to a two-door cabinet


Open-sourced!


Co-led @tonyzzhao, @chelseabfinn pic.twitter.com/wQ2BLDLhAw


— Zipeng Fu (@zipengfu) January 3, 2024


AInsights

While early in its development, Mobile ALOHA is one of many robotics projects being trained to perform complex human tasks. In these demos, we see teleoperated tasks, but the team does have autonomous results for a smaller set of tasks. The point is that this is how AI robots learn to repeat and optimize tasks.

For comparison, this is Tesla’s Optimus robot learning how to fold clothes.


This is the @Tesla Optimus robot. Here it learns to fold a shirt. Next, it will do this autonomously, in most environments, at speed.


pic.twitter.com/fxEilqp0BL


— Brian Solis (@briansolis) January 21, 2024


Perhaps more important is the fact that Mobile ALOHA is accelerating at Stanford; we can only imagine how many advanced AI-powered robotics initiatives are taking place around the world that we don’t know about. At any moment, breakthroughs will reach the masses, and suddenly your next big purchase might not only be an Apple Vision Pro.

AI-powered ‘answers’ engine Perplexity raises $73.6 million, now valued at $520 million, and is expected to compete against Google.

Perplexity is an AI-powered answers engine that also cites its sources. As a former analyst, I very much appreciate the ability to fine-tune results toward academic sources. This is one of the very few generative AI tools that I use on a daily basis.

The company raised $73.6 million in a funding round led by IVP with additional investments from Databricks Ventures, Shopify CEO Tobi Lutke, NVIDIA, and Jeff Bezos. That’s a pretty powerful group betting on the future of AI-powered search.

The company was founded by Aravind Srinivas, Denis Yarats, Johnny Ho, and Andy Konwinski. Srinivas previously researched language and genAI models at OpenAI.

Perplexity aims to address weaknesses in current search models, particularly in providing transparency and accuracy (read: SEO/SEM/LLM). Additionally, Perplexity provides a conversational AI interface that refines search queries with follow-up questions, helping users get more relevant answers. Unlike conventional search engines, Perplexity.ai employs deep learning algorithms to gather information from diverse sources and present it in a concise and accurate manner.

AInsights

I use Perplexity for its focus on transparency and ability to elevate and hone answers based on prompts and continuing exchanges. Google still plays a big role in my search behavior. But, and I’ve said this going back to every social media attempt, the company lacks a human algorithm. Traditional search is an always-evolving machine algorithm. But when someone asks a question, they do so from a human perspective. Human-centered algorithms can account for intent and desired outcome to then activate algorithms accordingly. Language vs. rank becomes key here.

On a side note, Perplexity and Rabbit announced a partnership to integrate the answer engine into the R1.


We're thrilled to announce our partnership with Rabbit: Together, we are introducing real-time, precise answers to Rabbit R1, seamlessly powered by our cutting-edge PPLX online LLM API, free from any knowledge cutoff. Plus, for the first 100,000 Rabbit R1 purchases, we're… pic.twitter.com/hJRehDlhtv


— Perplexity (@perplexity_ai) January 18, 2024
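The “PPLX online LLM API” mentioned in the tweet is an OpenAI-style chat-completions endpoint. A hedged sketch of what a call looks like; the endpoint path follows Perplexity’s public docs, while the model name and API key are placeholder assumptions that change over time:

```python
import json
from urllib import request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(question: str, model: str = "pplx-7b-online") -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": question}]}

def ask(question: str, api_key: str) -> str:
    """Send the request and return the assistant's answer text."""
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Inspect the request body without making a network call.
print(build_payload("What did Perplexity announce with Rabbit?"))
```

Because the interface mirrors OpenAI’s, existing chat-completions client code can usually be pointed at Perplexity’s base URL with minimal changes.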


Stability AI introduces a smaller, more efficient language model (aka small language model) to democratize generative AI.

Stability AI, makers of Stable Diffusion, released its smallest language model to date, Stable LM 2 1.6B. This is really about enhancing AI accessibility and efficiency, even if a smaller model trades away some of the raw capability of its larger counterparts. The model is smaller and more efficient, aiming to lower barriers and enable more developers to utilize AI in various languages (English, Spanish, German, Italian, French, Portuguese, and Dutch).

The release represents the company’s commitment to continuous improvement and innovation in the field of artificial intelligence.

AInsights

This release signifies a major advancement in small AI language models, particularly for multilingual text generation. The model’s outperformance of other small AI language models highlights its potential to drive innovation and competitive advantage for businesses investing in AI technologies. Said simply, this presents an opportunity for executives to leverage cutting-edge AI capabilities for driving innovation, improving accessibility, and staying ahead in the competitive landscape.

The model’s efficiency lowers hardware barriers, allowing more developers to participate in the generative AI ecosystem and making cost-sensitive and on-device applications more practical. This positions Stable LM 2 1.6B as a valuable tool for expanding the reach of AI-driven solutions, which is increasingly important in today’s business environment.


TikTok gives users the ability to use generative AI to create songs (sort of).

TikTok has become an incredible platform for discovering new music and also for launching new artists. Now, with AI Song, users can create new music with text prompts.

AInsights

This isn’t yet what you might think. It’s more of an AI song collaborator that draws on an existing music library.

“It’s not technically an AI song generator — the name is likely to change and it is currently in testing at the moment,” Barney Hooper, a TikTok spokesperson, said in an email to The Verge. “Any music used is from a pre-saved catalog created within the business. In essence, it pairs the lyrics with the pre-saved music, based on three genres: pop, hip-hop, and EDM.”

This also opens a can of worms. With The New York Times suing OpenAI over copyright infringement, and OpenAI, Apple, and others striking deals with media companies to license their data for LLM training, TikTok and others have to build on a more collaborative foundation.

AI is coming for creatives, according to the FT.

I guess we’re going to have to get used to headlines like this, in every industry.

According to the FT, architects are incorporating DALL-E, Midjourney, and other generative AI tools into complex design work, threatening the livelihoods of illustrators and animators.

You’ll notice similar observations in this article, in Suleyman’s interview, and from others. Until now, humans were the only species that could conceive and build on original ideas. AI is now an infinite idea factory, building on the ideas of humankind; it has been trained on everything humans have created.

We should not fear this; we should embrace it if we’re to move forward and upward.

AInsights

Here, we need to revisit the conversation where Suleyman warned every business that AI will replace certain jobs. As humans, we need to stoke our creativity, our imagination, and most importantly, our empathy, to create and collaborate with AI in ways that weren’t possible before.

We can’t fear AI and robots if we’re not willing to stop acting like them in our work. AI is creating new opportunities and areas for innovation that, in all honesty, are overdue.

Wanyu He is the founder of XKool in Shenzhen, a team of former Google engineers, AI scientists, mathematicians, and crossover designers. Her words close out these AInsights…

“In the future, architects will be empowered to show the client thousands of options and refine the best one so that even on a low budget you will be able to get the best building. We worry about AI escaping human control and causing a disaster for mankind, and in my novels most of the future AI scenarios are not optimistic. But it is this writing which gave me an awareness to prevent these things happening. AI should be a co-pilot and a friend, not a replacement for architects.”

Please subscribe to my newsletter, a Quantum of Solis.

The post AInsights: AI is Coming for Jobs, Robots are Here to Help, AGI is on the Horizon, Google has a Formidable Competitor appeared first on Brian Solis.

Published on January 22, 2024 10:20