Zack Kass, Head of GTM for OpenAI, kicked off Madrona Venture Labs and Madrona’s Launchable: Foundation Models event series on January 23. Over 100 builders, investors, and community members tuned in to learn what we can expect from OpenAI this year. “The OpenAI mission is to ensure that artificial general intelligence (AGI) benefits all of humanity,” Kass opened. With both Microsoft and customers on the journey with them, Kass said OpenAI will continue to keep this mission front and center.
OpenAI’s platform hosts its technologies via API: GPT, Codex, DALL-E 2, and Whisper, a model trained on licensed speech data that can transcribe and interpret speech, which Kass said will go live in the API next week. Throughout the year, OpenAI will continue to add modalities and upgrade its current technology, and users can expect more advanced API endpoints.
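To make the API model concrete, here is a rough, hypothetical sketch of what a 2023-era request body to the text completion endpoint might look like. The model name, parameter values, and prompt are illustrative assumptions for this recap, not details confirmed in the talk:

```python
import json

# Hypothetical sketch of a request body for OpenAI's text completion
# API, circa early 2023. The model name and parameter values below are
# illustrative assumptions, not details confirmed in the talk.
payload = {
    "model": "text-davinci-003",   # a GPT-3.5-family model
    "prompt": "Summarize the OpenAI mission in one sentence.",
    "max_tokens": 64,              # cap on the length of the generated text
    "temperature": 0.7,            # sampling randomness
}

# Serialized as JSON, this would be POSTed to the completions endpoint
# with an API key supplied in the Authorization header.
body = json.dumps(payload)
```

The point Kass emphasized is that builders consume these models through exactly this kind of thin API surface, while OpenAI keeps improving the models underneath.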
Kass also touched on ChatGPT, a publicly available web app powered by a customized GPT-3.5 engine and launched as a research preview.
“It served as a reminder that the application layer is really important and how people engage with these models is almost just as important as the models themselves,” Kass said.
Kass left the floor open for questions for the rest of the presentation, touching more deeply on topics such as GPT and ChatGPT, creating safe and unbiased technology, the future of OpenAI, and advice for entrepreneurs and builders.
As Kass explained, GPT is a family of models. ChatGPT is built on an underlying GPT-3.5 model, specifically Davinci-003. It is a slightly more aligned model, trained on dialogue, that can interact with users conversationally, much as a human would.
There is a lot of work being put in to ensure that future models of GPT are better at translating languages other than English, and Kass hopes it can eventually be fluent in all written languages.
ChatGPT also recently achieved HIPAA compliance, and in tandem with Microsoft Azure, this will enable more work to be done without the interference of red tape, paving the way for future healthcare solutions.
In addition to addressing the potential for foundation models, Kass responded to worries that ChatGPT could decrease the diversity of results, hurt creativity, and make students more apt to cheat.* Kass compared this concern and apprehension to that of the dot-com era. However, he believes the long-term benefits of foundation models far outweigh the short-term implications.
Kass went on to say that although many educators are banning GPT, OpenAI is doing a lot of work to update policymakers and institutions on its benefits and the opportunities it opens for educators. He described how it could be used in the future as a tool to create personalized tutors for students, and could be provided to support kids who may not be able to afford tutoring themselves.
They are also working to create holistic solutions for plagiarism detection. The open question is whether the solution belongs on the front end or the back end, but Kass says that in the distant future there will be tools to determine authorship. There could even be the potential for AI to be a cited contributor in academia.
OpenAI is ensuring that safety and alignment get the attention they deserve by directing leadership attention and resources to this area. They are working to decrease bias within their technology and have already made visible progress in how their technology defaults, according to Kass.
They are also working to mitigate misinformation and disinformation. Their policy, safety, and product teams all work together to ensure they are careful about what their models can do before they are published, and ChatGPT has a variety of guardrails to further help with this.
In terms of how the industry can reward individuals whose data contributed to the training of these models, Kass explained that there is currently no good system to ensure primary creators get the credit they deserve. It is simply too difficult to know which data points were used and how much each one influenced the final product.
In the future, people could earn credit for prompts or apps they design within an AI app store, but for now, Kass compared it to citing sources in academia: we don’t have a good system to give creators credit other than to thank them.
Kass said that “there's no good excuse” for why Codex is still in beta, but it will be released soon. There will also be new, different, and interesting product extensions on the horizon, yet Kass was unable to talk about any specifics.
OpenAI will continue to focus almost exclusively on foundation models and allow others to do the rest. Kass said the company will probably never build products that fulfill a specific task, but it wants the platform to continue to enable others to build those kinds of products.
The team is also excited about their expanded partnership with Microsoft. They expect to partner and expand on development, marketing, and enterprise initiatives, and are excited to use Azure as a springboard and catalyst for new work.
Kass believes that EdTech is a huge potential avenue for AI in the future. While it would start out slow, Kass believes that these technologies will eventually be the norm.
“We could build customized tutors, we can do on-the-fly testing,” Kass said. “You can imagine a world where kids who don't want to wake up at seven in the morning can now learn at the hours that they're excited to learn because their customized tutor is willing to stay awake with them all night.”
For those interested in startups, Kass recommends getting involved in open-source and startup communities. Going to events and connecting with like-minded people will be especially helpful. Similarly, many companies are hiring specialists to integrate AI and ensure they stay relevant, so there is a lot of opportunity for people to get their foot in the door.
“Using this technology, and I think just generally becoming an expert in it, will serve people pretty well,” Kass said.
. . .
*As of January 31, 2023, OpenAI released a new “AI text classifier.” Though imperfect, it is OpenAI’s first step in helping educators “distinguish between AI-written and human-written text.”
We are with our founders from day one, for the long run.