Oct 11 2024
11 min read
1. Collaborative AI is what’s next
- The past few weeks have seen product launches that are pointing to collaborative AI being what’s next in generative AI (after, or really in parallel with, agentic AI). The most recent introduction was OpenAI’s Canvas last week. Canvas is an alternative way to work with ChatGPT on writing/coding projects that need editing and revisions, through workspaces that facilitate persistent, in-line collaboration. Imagine you’re working with a virtual colleague in real-time on an editable document or codebase – except that collaborator happens to be AI.
- Canvas follows in the footsteps of Anthropic’s Artifacts, Google’s Gemini for Workspace and Gemini Gems, Microsoft’s GitHub Copilot Workspaces and Copilot Pages, and infinite-board startups Kuse and Cove, not to mention the growing swath of AI writing software (e.g. Hyperwrite, Jasper, JotBot, Rimbaud), AI coding companions (e.g. Cursor), and collaborative AI agents (e.g. SAP’s Joule) and models (e.g. MIT’s Co-LLM). While each has its own nuances (e.g. writing vs. coding, direct editing vs. AI-only, integrated vs. standalone), they all focus on making it easier for users to collaborate with AI – and in some cases, for AI to collaborate with AI.
- For OpenAI’s Canvas, the original chat-based model is still available but users now have the option to use Canvas through ChatGPT’s model-picker dropdown (listed as “ChatGPT 4o with canvas”). It opens up as an expanded window with the working draft on the right and a chat bar on the left. (The canvases for writing and code-editing are slightly different, with the latter having line numbers.) ChatGPT can also automatically trigger a canvas to open based on the prompt.
- Canvas users can make their own direct edits in the working draft, or highlight the specific lines they want the AI to work on and provide direction on edits. For a written article, the model can generate a draft, revise for tone and length, adjust the reading level, suggest edits, and do final checks (e.g. grammar, clarity, consistency). For code, the model can generate code, debug any issues, suggest changes, translate into a different language (a feature that Anthropic’s Artifacts lacks), and add comments and logs. Users can revert to a prior version using the “back” button.
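The edit-and-revert loop described above can be sketched as a simple revision stack. This is an illustrative model only, not OpenAI's implementation; all names here (`CanvasDraft`, `apply_edit`, `revert`) are hypothetical:

```python
# Illustrative sketch (not OpenAI's actual implementation): a minimal
# "canvas" draft with a linear version history and a revert operation,
# mirroring the user/AI edit loop and "back" button described above.

class CanvasDraft:
    def __init__(self, text: str = ""):
        self._history = [text]  # every saved revision, oldest first

    @property
    def text(self) -> str:
        """The current working draft is the latest revision."""
        return self._history[-1]

    def apply_edit(self, new_text: str) -> None:
        """Record a new revision, whether from the user or the model."""
        self._history.append(new_text)

    def revert(self) -> str:
        """Step back one revision, like Canvas's "back" button."""
        if len(self._history) > 1:  # never discard the original draft
            self._history.pop()
        return self.text


draft = CanvasDraft("First drft of the article.")
draft.apply_edit("First draft of the article.")  # model fixes a typo
draft.apply_edit("A punchier opening line.")     # model rewrites for tone
print(draft.revert())  # steps back to "First draft of the article."
```

A real collaborative editor would track fine-grained operations (insertions, deletions, highlighted-range edits) rather than whole-text snapshots, but the user-visible behavior is the same: every edit is recorded, and "back" walks the history.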
- Canvas – which currently uses GPT-4o as its underlying model – was built using open-source libraries ProseMirror (in-browser rich-content editor) and CodeMirror (in-browser code editor). OpenAI used new synthetic-data generation techniques (e.g. distilled o1 outputs) to post-train the model for improved writing quality and user interactions.
- Canvas was rolled out in early beta to ChatGPT Plus and Team users globally last week, and Enterprise and Edu users received access yesterday. Once Canvas is out of beta, OpenAI expects to make Canvas available to Free users as well.
- Canvas isn’t perfect but reports so far are generally positive. People are using Canvas to turn Dribbble designs into code, solve coding problems, write research reports, build an AI agent to find Tech Week events, and more. Still, Canvas is only as good as the underlying model allows it to be – it can still hallucinate or just get things wrong.
- If we assume the industry will catch up to Anthropic’s Artifacts, which was launched Jun 2024, what we’ll likely see next is visual prototypes displayed alongside working code, and the ability to readily remix and share instances with others. (Microsoft’s Copilot Pages also has shareable links and the ability to tag colleagues.) Artifacts has been generally available since Aug 2024 to all Claude users, including those on the Free plan – which makes it not-too-surprising that OpenAI plans to make Canvas available to Free users once it’s out of beta.
- How people work with AI is perhaps as important as what the underlying model can do. Through one lens, products like Canvas aren’t inventing anything new – they’re just offering a new interface for existing models (e.g. GPT-4o). On the other hand, the value of ChatGPT at its introduction was in the accessibility of the new interface, not its technological novelty, and it ended up going viral and becoming a real inflection point for OpenAI. OpenAI is calling Canvas “the first major update to ChatGPT’s visual interface since we launched two years ago.”
- Inevitably, these features will become a more seamless part of productivity suites that users are already using (e.g. Google Docs, Microsoft Word). It’s not clear whether OpenAI’s foray with Canvas is more of a stop-gap measure to fend off Anthropic (and capitalize on the talent it’s poached from Anthropic), or if it’s actually strategic. Certainly, the more these features are baked into the larger players’ offerings, the harder it will be for startups building applications on top of 3rd-party models to offer something differentiated. It will also inevitably start to render other kinds of startups obsolete (e.g. those building Visual Studio Code extensions, coding assistants, and personalized assistants).
- Canvas research lead Karina Nguyen (formerly of Anthropic) views collaboration with AI as inextricably intertwined with personalization: “Autocompleting a human’s thought during creative processes like writing or coding is the ultimate personalization.” Others believe that personalization will go beyond autocompletion, to the anticipation of needs. Nguyen’s “vision for the ultimate AGI interface is a blank canvas,” one that “self-morphs over time with human preferences.” A standalone blank canvas may not, however, be the most accessible interface for the average user – and it’s hard to create a personalized experience if the user isn’t using it.
Related Content:
- Oct 4 2024 (3 Shifts): OpenAI’s for-profit transformation
- Sep 20 2024 (3 Shifts): Big SaaS vendors are rolling out AI agents
Become an All-Access Member to read the full brief here
All-Access Members get unlimited access to the full 6Pages Repository of 688 market shifts.
Disclosure: Contributors have financial interests in Meta, Microsoft, Alphabet, and OpenAI. Google and OpenAI are vendors of 6Pages.
Have a comment about this brief or a topic you'd like to see us cover? Send us a note at tips@6pages.com.