Collaboration, AI, and the Long Game



DALL·E prompt: stubborn executive computer


Your department or employer has embraced Slack, Microsoft Teams, or another internal collaboration hub. You regularly send your colleagues direct messages. You frequently post messages in channels. Some contain brief videos that you record to save time. As you type a new message, you see the following prompts:

Elaine doesn’t typically respond to messages containing more than 100 words. Consider calling her or scheduling time with her instead.
Despite being a member of [channel name], Kramer isn’t active on it. Send him a DM instead if you want him to respond.
Members of [channel name] are far more likely to watch a video than read a long message.
Did you know that George asked this question a year ago in [channel name] and Jerry answered it? Click here to view it.

All of these scenarios sound far-fetched, right?


A few years ago, sure.


Today, however, not so much.


Generative AI is coming to collaboration and communication applications. (Exhibit A: the recently announced Slack GPT.) Make no mistake: As I predicted in the final chapter of Reimagining Collaboration, these technologies can and will immensely improve how we work. They will save us a great deal of time and help us prioritize what we do.

Under the Hood

Large language models underpin today’s crop of robust generative AI tools, but LLMs don’t learn by themselves. They require data. Lots and lots of data.
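To make “lots and lots of data” concrete, here is a minimal sketch of how conversations that already live in a hub could be turned into training or retrieval examples. Everything in it is a labeled assumption: the Message record, its field names, and the thread-to-question-and-answer pairing are illustrative, not any vendor’s actual pipeline.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical message record. The field names are illustrative; they are
# not a real Slack, Teams, or Zoom schema.
@dataclass
class Message:
    msg_id: str
    thread_id: str   # a message that starts a thread carries its own msg_id here
    channel: str
    author: str
    ts: float        # epoch seconds
    text: str

def threads_to_pairs(messages: list[Message]) -> list[dict]:
    """Group hub messages into threads and emit question/answer pairs.

    A toy illustration of one way hub conversations could become model
    training or retrieval data. Conversations that happen over e-mail or
    text never show up in `messages`, so they never become examples.
    """
    threads: dict[str, list[Message]] = defaultdict(list)
    for m in messages:
        threads[m.thread_id].append(m)

    pairs = []
    for thread in threads.values():
        thread.sort(key=lambda m: m.ts)
        if len(thread) >= 2:                  # a question plus at least one reply
            pairs.append({
                "prompt": thread[0].text,     # e.g. George's original question
                "completion": thread[1].text  # e.g. Jerry's answer
            })
    return pairs
```

The point isn’t the code; it’s that every thread captured this way is one more example the model can learn from, and every conversation that stays in an inbox isn’t.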


And here’s why it’s essential that all—yes, all—employees communicate via internal collaboration hubs as much as possible: More data in Slack, MS Teams, Zoom, and their ilk results in deeper learning. LLMs get smarter. In turn, they will make better connections and suggestions down the road.
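And what might those “better connections and suggestions” look like? The nudges at the top of this post wouldn’t necessarily require anything exotic. Here’s a hypothetical sketch, reusing the Message record from above, of the kind of simple statistic that could drive the Elaine example; the 100-word cutoff and the 20 percent response-rate threshold are my own illustrative choices, not how Slack GPT or any shipping product actually works.

```python
def suggest_nudge(draft_text: str, recipient: str,
                  history: list[Message]) -> str | None:
    """Suggest a nudge before a long message goes out.

    Purely illustrative: it only works because the recipient's past
    behavior is observable in the hub's history.
    """
    # Long messages other people sent, and the threads the recipient replied in.
    long_msgs = [m for m in history
                 if m.author != recipient and len(m.text.split()) > 100]
    replied_threads = {m.thread_id for m in history if m.author == recipient}
    answered = sum(1 for m in long_msgs if m.thread_id in replied_threads)

    if (len(draft_text.split()) > 100
            and long_msgs
            and answered / len(long_msgs) < 0.2):
        return (f"{recipient} doesn't typically respond to messages containing "
                "more than 100 words. Consider calling or scheduling time instead.")
    return None
```

Starve it of history, because the conversation happened over e-mail instead, and the heuristic has nothing to go on.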

Simon Says: Think bigger.

Every internal e-mail, text, WhatsApp message, and other communication outside of the hub represents a missed opportunity to train and improve these LLMs. (Remember that GPT stands for generative pre-trained transformer.) Over time, these omissions add up. Recommendations won’t be as relevant—and some will be grossly inaccurate.


Think about that next time people start conversations via e-mail or divert an existing conversation from the hub to an inbox.

