ChatGPT for link building

March 3, 2023


I’m fascinated by the potential of ChatGPT to improve link building tactics, focus efforts and deliver efficient campaigns.

After performing over 1,000 prompts and using the tool for client campaigns, I’m certain ChatGPT should find its way into your SEO tool stack. Below, I’ll cover how the AI chatbot can be used for link building processes.

To set up this article, I had ChatGPT write me a nut graph:

“The use of AI-powered tools like ChatGPT is becoming increasingly popular in the world of SEO, and for good reason. In this article, I’ll be showing you how ChatGPT can be used for link building and why it’s a valuable addition to your link building toolkit.” – ChatGPT, prompted by Kevin Rowe

Yes, I think I will do just that.

Before we jump in, I want to explain the anatomy of a prompt.

Can ChatGPT assist with an article on link building for SEO?

When planning and executing campaigns, I use established models referenced in reputable books and taught at universities. I can’t memorize them all, so I frequently have to review my book notes or search Google to find templates. This typically involves visiting 5+ websites and clicking around for at least an hour of research.

Let’s see how ChatGPT does at recommending a template for this article.

Prompt: “What are 5 contemporary case study templates from highly authoritative sources?”

I don’t hate these results, but I need more direction to make this usable and actionable.

Prompt: “Which of these case study templates are best for a 1500-word article on link building?”

I don’t know if I completely believe this, but I’m no quitter.

Prompt: “Write an outline for that article about using ChatGPT for link building, using that template.”

This isn’t a lousy outline for less than 5 minutes of work, although it requires the touch of a subject matter expert (SME) and the headings could use some SEO love.

Let’s follow this structure for my article and see how it goes.

Using ChatGPT for link building: A case study

Case study overview

The importance of using ChatGPT for link building

After using ChatGPT for research and testing it with my team, I’ve found the business case for it is to:

Build a deeper understanding of specific tactics and tools.

Narrowly focus the direction of efforts.

Build efficiency in research, planning, and on-site content for earned links.

Introduction to ChatGPT as a potential link building tool

ChatGPT is like a junior SEO specialist who reads a lot. They may not have the experience and context to understand a technique’s impact, but you can identify many great ideas with the right prompts. But I guess that’s just being a manager and leader.

The ideal scenario would be for the entire link building process to be automated in a way that was 100% compliant with Google’s link spam guidelines and delivered a short-term impact that drove a 3:1 ROI.

However, the realistic application is it can save time and resources at various stages of a known process or help with rapid brainstorming of new link building ideas.

To use the tool, you’ll need to develop your own prompts. This clever graphic from Rob Lennon provides a robust structure to get the most useful responses.

You don’t need to use this structure for every prompt. But as you’ll see below, a good prompt structure is helpful to uncover insights that could have taken much longer to find otherwise.
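Since the graphic isn’t reproduced here, below is a minimal sketch of what a structured prompt can look like, assuming a generic breakdown into role, task, context, constraints and output format; these labels are an illustration, not Rob Lennon’s exact framework.

```python
# A minimal sketch of a structured prompt, assuming a breakdown into
# role, task, context, constraints and output format. Illustrative only,
# not Rob Lennon's exact template.

def build_prompt(role, task, context, constraints, output_format):
    """Assemble the pieces into a single prompt string."""
    return (
        f"Act as {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="an experienced SEO link building strategist",
    task="suggest five outreach angles for a data-driven study",
    context="the study covers remote-work salary trends in 2023",
    constraints="only tactics that comply with Google's link spam policies",
    output_format="a numbered list with one sentence per idea",
)
print(prompt)
```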

A brief background on link building

Current landscape of link building

Which link building technique should you use? This is as polarizing a question as presidential elections, at least for an SEO geek.

Here are significant considerations when creating a link building strategy.

Link building ethical considerations: A wide range of tactics can be used. All of them have different levels of risk. If you’re concerned about link building ethics, you will want to stay up-to-date on Google’s perspective on links.

Google’s POV: Google’s John Mueller suggests building great content and telling people about it through social, relevant blogs and forums, and other online or real-world communities.

I highly recommend you reference Google’s material on link building, especially if you’re in a Your Money or Your Life (YMYL) niche.

Link building techniques: These can be categorized into paid, organic/natural, and earned links. Backlinko’s list of link building strategies can help you gain direction. I highly recommend customizing your own email templates for outreach.

Here is a general overview of the common approaches.

Paid links: Guest posting is popular. This includes creating content for third-party blogs or trade sites, which can be paid or organic. There is a wide range of metrics, quality standards, and contextual considerations when selecting which site to spend time working with for a placement.

Link earning: Creating content that has a unique value proposition (UVP) or a unique selling proposition, and then sharing it with a community to see if they will like and share it. Basically, going viral. It’s a lot like product marketing but with content.

Organic/natural: Digital PR: Using tools like HARO to secure links through journalist outreach. Link reclamation: Includes unlinked brand mentions or unlinked thematic mentions (i.e., finding articles with a contextual but unlinked keyword).

Niche tactics: Then there’s a variation of tactics like infographic distribution, unique data-driven articles, coining new phrases, widgets and calculators, and so much more.

The limitations of current link building techniques

Following ChatGPT’s template results in a wordy article, so let’s keep the rest of this simple.

No link building tactic is guaranteed to work, and they are all time-consuming. The economics of link building is the biggest limiting factor.

Limitations of ChatGPT

Refer to this list of limitations from OpenAI before using ChatGPT for real business applications.

Challenges with SEO-specific link building

Earning links is among the hardest facets of an SEO or content marketing program.

After building thousands of links, I’ve found the biggest challenges with running a scalable and impactful campaign include the following:

Evaluating site quality

Content and site quality are critical to SEO. Algorithm updates like Penguin and the December 2022 link spam update powered by SpamBrain make it so.

65% of marketers use third-party metrics such as domain authority, domain rating, or page authority to evaluate site quality, even though Google does not use these metrics. Instead, the search quality rater guidelines point to factors such as contextuality, reputation, usability, and the quality of the site or its contributors as an entity.

Also, note that this document was updated on December 15, 2022. ChatGPT has no information about recent changes because it is not connected to the internet.

Evaluating impact

Determining which factor drove your KPIs forward is difficult at both the domain and page levels. Google has stated that when it catches link spam, it devalues the offending links rather than demoting the site. This creates major challenges for analysis.

Scalability

On-site content creation and off-site link research, prospecting, and outreach are mind-numbingly resource-intensive and time-consuming.


ChatGPT as a solution for link building

Looking at the challenges of link building, I’ve found a few applications to solve these massive hurdles.

Evaluating site quality

Unfortunately, ChatGPT can’t evaluate site quality for you. It can, however, simplify the process of interpreting Google’s guidelines on site quality.

Prompt: “Create a list of criteria to evaluate site quality for link building against Google’s Search Quality Evaluator Guidelines.”

Let’s dive deeper.

Well, well…

But how do you identify expertise and reputation? I asked to “elaborate on expertise and reputation” and got:

Not very algorithmic, but for those who want to do link building that complies with Google’s guidelines, this can be useful.

So let’s get more specific and find software to help.

OK. Now I need a step-by-step process to use a tool. Let’s use Ahrefs.

This does have some value and will provide initial direction for people who are new to using Ahrefs for link building.

Evaluating impact

In the same line of thinking, ChatGPT can give you some tool recommendations.

I started with a general prompt: “What tools are best to evaluate a site’s ranking improvement in Google?”

I used the word “best,” which is not the most precise language, but let’s see where it takes me.

I love Serpstat, so let’s see if I can quickly identify a process to evaluate my keyword rankings.

There you have it. I’m certain we can go much deeper into niche tools and processes with these types of prompts.

Scalability

I’ve been attempting to completely automate natural link building techniques, with no satisfying results.

You know what would be amazing? If ChatGPT could help with broken link building.

This looks amazing at first glance and would have saved me hours of research, but none of these links are useful.

I thought this would be clever if it worked, but the list consisted of internal links or pages that had no links pointing to them. This doesn’t help with broken link building.

However, with some prompting, I was able to identify a tool (Ahrefs again) and get answers for how to find the broken link reports…

And:

Not bad.

You can also save time with “act as” and the “step-by-step” prompts.

Let’s see if I can use these prompts to understand John Mueller’s perspective on link building and have Brian Dean write me a strategy.

First, let’s see what Mueller has to say about how to build links.

Now I need to design a link building program with a strong foundation in great content (The response above suggests creating viral content, which is its own article).

Prompt: “Write me a strategy for SEO link building as if you were Brian Dean from Backlinko.”

I’ll take this and see if I can make it more actionable.

The step-by-step hack doesn’t provide steps exact enough that I could follow the plan on its own, but I was able to narrow things down and find some direction in under 5 minutes of prompting.

ChatGPT for link building – a promising use case

Content marketing and link building are different from programming. There isn’t much published research from reliable sources to train against.

When you have people to reference, known strategies and specific tools or processes, ChatGPT can potentially reduce research time.

I hope to see ChatGPT provide more valuable automation and scalability. But right now, it’s a great assistant to shorten the research and planning process.

Consider creating your own prompts that show you how to use specific SEO and link building tools, or that serve as quick references for established processes.

OpenAI will let developers build ChatGPT into their apps


The company also launched a more powerful API for its Whisper speech transcription tool.

OpenAI, the company behind ChatGPT and DALL-E 2, announced several significant changes today. First, it’s launching developer APIs for ChatGPT and the Whisper speech-transcription model. It also changed its terms of service to let developers opt out of using their data for improvements while adding a 30-day data retention policy.

The new ChatGPT API will use the same AI model (“gpt-3.5-turbo”) as the popular chatbot, allowing developers to add either unchanged or flavored versions of ChatGPT to their apps. Snap’s My AI is an early example, along with a new virtual tutor feature for the online study tool Quizlet and an upcoming Ask Instacart tool in the popular local-shopping app. However, the API won’t be limited to brand-specific bots mimicking ChatGPT; it can also power “non-chat” software experiences that could benefit from AI brains.

The ChatGPT API is priced at $0.002 per 1,000 tokens (about 750 words). Additionally, it’s offering a dedicated-capacity option for deep-pocketed developers who expect to use more tokens than the standard API allows. The new developer options join the consumer-facing ChatGPT Plus, a $20-per-month service launched in February.
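For a sense of what integration looks like, here is a minimal sketch of a ChatGPT API call using the openai Python package as it was documented at launch (pre-1.0 versions of the library); the API key and prompt are placeholders.

```python
# Minimal sketch of a ChatGPT API call at launch, using the openai
# Python package (pip install openai). The API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # same model family that powers ChatGPT
    messages=[
        {"role": "user", "content": "Summarize why backlinks matter for SEO in two sentences."},
    ],
)

print(response["choices"][0]["message"]["content"])
# Usage is billed at $0.002 per 1,000 tokens (roughly 750 words).
```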


Meanwhile, OpenAI’s Whisper API is a hosted version of the open-source Whisper speech-to-text model it launched in September. “We released a model, but that actually was not enough to cause the whole developer ecosystem to build around it,” OpenAI president and co-founder Greg Brockman told TechCrunch on Tuesday. “The Whisper API is the same large model that you can get open source, but we’ve optimized to the extreme. It’s much, much faster and extremely convenient.” The transcription API will cost developers $0.006 per minute, enabling “robust” transcription in multiple languages and providing translation to English.
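For comparison, here is a minimal sketch of a Whisper API transcription call with the same package; the file name and key are placeholders.

```python
# Minimal sketch of a Whisper API transcription call with the openai
# Python package, as documented at launch. The file path is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("interview.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])
# Billed at $0.006 per minute of audio.
```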

Finally, OpenAI revealed changes to its developer terms based on customer feedback about privacy and security concerns. Unless a developer opts in, the company will no longer use data submitted through the API for “service improvements” to train its AI models. Additionally, it’s adding a 30-day data retention policy while providing stricter retention options “depending on user needs” (likely meaning high-usage companies with budgets to match). Finally, it’s simplifying its terms surrounding data ownership, clarifying that users own the models’ input and output.

The company will also replace its pre-launch review process for developers with a mostly automated system. OpenAI justified the change by pointing out that “the overwhelming majority of apps were approved during the vetting process,” claiming its monitoring has “significantly improved.” “One of our biggest focuses has been figuring out, how do we become super friendly to developers?” Brockman said to TechCrunch. “Our mission is to really build a platform that others are able to build businesses on top of.”

OpenAI launches an API for ChatGPT, plus dedicated capacity for enterprise customers


To call ChatGPT, the free text-generating AI developed by San Francisco-based startup OpenAI, a hit is a massive understatement.

As of December, ChatGPT had an estimated 100 million-plus monthly active users. It’s attracted major media attention and spawned countless memes on social media. It’s been used to write hundreds of e-books in Amazon’s Kindle store. And it’s credited with co-authoring at least one scientific paper.

But OpenAI, being a business — albeit a capped-profit one — had to monetize ChatGPT somehow, lest investors get antsy. It took a step toward this with the launch of a premium service, ChatGPT Plus, in February. And it made a bigger move today, introducing an API that’ll allow any business to build ChatGPT tech into their apps, websites, products and services.

An API was always the plan. That’s according to Greg Brockman, the president and chairman of OpenAI (and also one of the co-founders). He spoke with me yesterday afternoon via a video call ahead of the launch of the ChatGPT API.

“It takes us a while to get these APIs to a certain quality level,” Brockman said. “I think it’s kind of this, like, just being able to meet the demand and the scale.”

Brockman says the ChatGPT API is powered by the same AI model behind OpenAI’s wildly popular ChatGPT, dubbed “gpt-3.5-turbo.” GPT-3.5 is the most powerful text-generating model OpenAI offers today through its API suite; the “turbo” moniker refers to an optimized, more responsive version of GPT-3.5 that OpenAI’s been quietly testing for ChatGPT.

Priced at $0.002 per 1,000 tokens, or about 750 words, Brockman claims that the API can drive a range of experiences, including “non-chat” applications. Snap, Quizlet, Instacart and Shopify are among the early adopters.

The initial motivation behind developing gpt-3.5-turbo might’ve been to cut down on ChatGPT’s gargantuan compute costs. OpenAI CEO Sam Altman once called ChatGPT’s expenses “eye-watering,” estimating them at a few cents per chat in compute costs. (With over a million users, that presumably adds up quickly.)

But Brockman says that gpt-3.5-turbo is improved in other ways.

“If you’re building an AI-powered tutor, you never want the tutor to just give an answer to the student. You want it to always explain it and help them learn — that’s an example of the kind of system you should be able to build [with the API],” Brockman said. “We think this is going to be something that will just, like, make the API much more usable and accessible.”

The ChatGPT API underpins My AI, Snap’s recently announced chatbot for Snapchat+ subscribers, and Quizlet’s new Q-Chat virtual tutor feature. Shopify used the ChatGPT API to build a personalized assistant for shopping recommendations, while Instacart leveraged it to create Ask Instacart, an upcoming tool that’ll allow Instacart customers to ask about food and get “shoppable” answers informed by product data from the company’s retail partners.

“Grocery shopping can require a big mental load, with a lot of factors at play, like budget, health and nutrition, personal tastes, seasonality, culinary skills, prep time, and recipe inspiration,” Instacart chief architect JJ Zhuang told me via email. “What if AI could take on that mental load, and we could help the household leaders who are commonly responsible for grocery shopping, meal planning, and putting food on the table — and actually make grocery shopping truly fun? Instacart’s AI system, when integrated with OpenAI’s ChatGPT, will enable us to do exactly that, and we’re thrilled to start experimenting with what’s possible in the Instacart app.”

Those who’ve been closely following the ChatGPT saga, though, might be wondering if it’s ripe for release — and rightly so.

Early on, users were able to prompt ChatGPT to answer questions in racist and sexist ways, a reflection of the biased data on which ChatGPT was initially trained. (ChatGPT’s training data includes a broad swath of internet content, namely e-books, Reddit posts and Wikipedia articles.) ChatGPT also invents facts without disclosing that it’s doing so, a phenomenon in AI known as hallucination.

ChatGPT — and systems like it — are susceptible to prompt-based attacks as well, or malicious adversarial prompts that get them to perform tasks that weren’t a part of their original objectives. Entire communities on Reddit have formed around finding ways to “jailbreak” ChatGPT and bypass any safeguards that OpenAI put in place. In one of the less offensive examples, a staffer at startup Scale AI was able to get ChatGPT to divulge information about its inner technical workings.

Brands, no doubt, wouldn’t want to be caught in the crosshairs. Brockman is adamant they won’t be. Why so? One reason, he says, is continued improvements on the back end — in some cases at the expense of Kenyan contract workers. But Brockman emphasized a new (and decidedly less controversial) approach that OpenAI calls Chat Markup Language, or ChatML. ChatML feeds text to the ChatGPT API as a sequence of messages together with metadata. That’s as opposed to the standard ChatGPT, which consumes raw text represented as a series of tokens. (The word “fantastic” would be split into the tokens “fan,” “tas” and “tic,” for example.)

For example, given the prompt “What are some interesting party ideas for my 30th birthday?” a developer can choose to append that prompt with an additional prompt like “You are a fun conversational chatbot designed to help users with the questions they ask. You should answer truthfully and in a fun way!” or “You are a bot” before having the ChatGPT API process it. These instructions help to better tailor — and filter — the ChatGPT model’s responses, according to Brockman.
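A sketch of what that looks like in practice, using the example strings from the paragraph above: the developer’s framing instruction travels as a separate “system” message alongside the user’s prompt instead of being concatenated into one raw string. The API key is a placeholder.

```python
# Sketch of passing a developer instruction and a user prompt as
# separate messages rather than one raw text blob, using the example
# strings from the article. The API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Developer-supplied framing, kept separate from user input.
        {"role": "system", "content": (
            "You are a fun conversational chatbot designed to help users "
            "with the questions they ask. You should answer truthfully "
            "and in a fun way!"
        )},
        # The end user's prompt.
        {"role": "user", "content": "What are some interesting party ideas for my 30th birthday?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```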

“We’re moving to a higher-level API. If you have a more structured way of representing input to the system, where you say, ‘this is from the developer’ or ‘this is from the user’ … I should expect that, as a developer, you actually can be more robust [using ChatML] against these kinds of prompt attacks,” Brockman said.

Another change that’ll (hopefully) prevent unintended ChatGPT behavior is more frequent model updates. With the release of gpt-3.5-turbo, developers will by default be automatically upgraded to OpenAI’s latest stable model, Brockman says, starting with gpt-3.5-turbo-0301 (released today). Developers will have the option to remain with an older model if they so choose, though, which might somewhat negate the benefit.

Whether they opt to update to the newest model or not, Brockman notes that some customers — mainly large enterprises with correspondingly large budgets — will have deeper control over system performance with the introduction of dedicated capacity plans. First detailed in documentation leaked earlier this month, OpenAI’s dedicated capacity plans, launched today, let customers pay for an allocation of compute infrastructure to run an OpenAI model — for example, gpt-3.5-turbo. (It’s Azure on the back end, by the way.)

In addition to “full control” over the instance’s load — normally, calls to the OpenAI API happen on shared compute resources — dedicated capacity gives customers the ability to enable features such as longer context limits. Context limits refer to the text that the model considers before generating additional text; longer context limits allow the model to “remember” more text essentially. While higher context limits might not solve all the bias and toxicity issues, they could lead models like gpt-3.5-turbo to hallucinate less.

Brockman says that dedicated capacity customers can expect gpt-3.5-turbo models with up to a 16k context window, meaning they can take in four times as many tokens as the standard ChatGPT model. That might let someone paste in pages and pages of tax code and get reasonable answers from the model, say — a feat that’s not possible today.

Brockman alluded to a general release in the future, but not anytime soon.

“The context windows are starting to creep up, and part of the reason that we’re dedicated-capacity-customers-only right now is because there’s a lot of performance tradeoffs on our side,” Brockman said. “We might eventually be able to offer an on-demand version of the same thing.”

Given OpenAI’s increasing pressure to turn a profit after a multibillion-dollar investment from Microsoft, that wouldn’t be terribly surprising.

OpenAI’s ChatGPT & Whisper API Now Available For Developers


OpenAI has announced that its ChatGPT and Whisper models are now available on its API, offering developers access to AI-powered language and speech-to-text capabilities.

Through system-wide optimizations, OpenAI has managed to reduce the cost of ChatGPT by 90% since December, and it is now passing these savings on to API users.

OpenAI believes the best way to realize the full potential of AI is to allow everyone to build with it.

The changes announced today can lead to numerous applications that everyone can benefit from.

More businesses can leverage OpenAI’s language and speech-to-text capabilities to develop next-generation apps powered by ChatGPT and Whisper.

Further, OpenAI has taken into account feedback from developers and made changes to its API terms of service to suit their needs better.

ChatGPT API

OpenAI is releasing a new ChatGPT model family called gpt-3.5-turbo, priced at $0.002 per 1k tokens, making it ten times cheaper than the existing GPT-3.5 models.

This model is ideal for many non-chat use cases and is the same model used in the ChatGPT product.

While GPT models traditionally consume unstructured text represented as a sequence of tokens, ChatGPT models consume a sequence of messages with metadata.

However, the input is ultimately rendered as a sequence of tokens for the model to consume.

The gpt-3.5-turbo model uses a new format called Chat Markup Language (ChatML).
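In practice, a developer sends a list of role-tagged messages, and the input is rendered into ChatML tokens behind the scenes. The sketch below approximates that rendering based on OpenAI’s ChatML documentation at the time; the exact special tokens are an implementation detail and may change.

```python
# Sketch of the two representations: the message list a developer sends,
# and (roughly) how ChatML renders it into tokens behind the scenes,
# based on OpenAI's ChatML documentation at the time.

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Which link building tactics does Google allow?"},
]

# Approximate ChatML rendering:
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Which link building tactics does Google allow?<|im_end|>
# <|im_start|>assistant
```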

ChatGPT Upgrades

OpenAI continuously improves its ChatGPT models and aims to offer these upgrades to developers.

Those who use the gpt-3.5-turbo model will always receive the recommended stable model, while still being able to choose a specific version.

OpenAI is launching a new version called gpt-3.5-turbo-0301, which will receive support until at least June 1st, and a new stable release is expected in April.

Developers can find updates on the models page for switching to the latest version.
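In code, the difference is just the model string: the alias follows whatever OpenAI currently recommends as stable, while the dated name pins a snapshot. A minimal sketch, with a placeholder key and prompt:

```python
# Sketch of following the stable alias vs. pinning a dated snapshot,
# using the model names given above. Key and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{"role": "user", "content": "Say hello."}]

# Follows whatever OpenAI currently recommends as the stable release.
openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

# Pins the March 1 snapshot, supported until at least June 1.
openai.ChatCompletion.create(model="gpt-3.5-turbo-0301", messages=messages)
```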

Dedicated Instances

OpenAI now offers dedicated instances for users who want more control over their model versions and system performance.

By default, requests are processed on shared compute infrastructure, and users pay per request.

However, with dedicated instances, developers pay for a time period to allocate compute infrastructure reserved exclusively for their requests.

Developers have complete control over the instance’s load, the option to enable longer context limits, and the ability to pin the model snapshot.

Dedicated instances can be cost-effective for developers who process beyond approximately 450M tokens per day.
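As a rough sanity check on that break-even figure, 450M tokens per day at the shared-infrastructure price of $0.002 per 1,000 tokens works out to roughly $900 per day:

```python
# Back-of-the-envelope cost of 450M tokens/day on the shared, pay-per-token
# plan, using the $0.002 per 1,000 tokens price quoted above.
tokens_per_day = 450_000_000
price_per_1k_tokens = 0.002  # USD

daily_cost = tokens_per_day / 1_000 * price_per_1k_tokens
print(f"${daily_cost:,.0f} per day")         # $900 per day
print(f"${daily_cost * 30:,.0f} per month")  # $27,000 per month
```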

Whisper API

OpenAI introduced Whisper, a speech-to-text model, as an open-source release in September 2022.

The open-source model has garnered considerable praise from the developer community. However, it can be challenging to run.

OpenAI is making the large-v2 model available through its API, providing developers with convenient on-demand access, priced at $0.006 per minute.

Additionally, OpenAI says its optimized serving stack provides faster performance than other services. The Whisper API is accessible through transcription or translation endpoints, which can either transcribe audio in the source language or translate it into English.
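A minimal sketch of the two endpoints with the openai Python package (pre-1.0); the audio file and key are placeholders.

```python
# Sketch of the two Whisper endpoints: transcription keeps the source
# language, translation returns English. File path and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("podcast_es.mp3", "rb") as audio_file:
    # Transcribe in the original language (Spanish stays Spanish).
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

with open("podcast_es.mp3", "rb") as audio_file:
    # Translate the same audio into English.
    translation = openai.Audio.translate("whisper-1", audio_file)

print(transcript["text"])
print(translation["text"])
```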

Developer Focus

OpenAI has made specific changes after receiving developer feedback. Examples of those changes include the following:

Not using data submitted through the API for service improvements, including model training, unless the organization consents to it.

Establishing a default 30-day data retention policy, with the option for stricter retention depending on the user’s needs.

Improving its developer documentation.

Simplifying its Terms of Service and usage policies.

OpenAI recognizes that providing reliable service is necessary to guarantee AI benefits everyone. To that end, OpenAI is committed to improving its uptime over the next few months.


Source: OpenAI

Elon Musk Building Rival of AI Chatbot ChatGPT, Calls It…


In recent months, Elon Musk has raised concerns about the growing capabilities of AI.

Billionaire Elon Musk is developing a new lab to create an alternative to ChatGPT, the artificial intelligence chatbot, which he said is too “woke”, according to Vice News. Mr Musk was one of the original founders of OpenAI, the parent company of ChatGPT, but left in 2018 after disagreements with the management. In recent months, he has been criticising the company and its product, including ChatGPT. The chatbot was launched in November last year and has since made waves across the world.

The Vice report is based on an interview with a researcher published in The Information. Igor Babuschkin, who left Google’s DeepMind AI unit, has been recruited by Mr Musk to lead the development of the rival chatbot.

Talking about the project, Mr Babuschkin told The Information, “The goal is to improve the reasoning abilities and the factualness of these language models. That includes making sure the model’s responses are more trustworthy and reliable.”

However, the project is still at a very early stage, and not many details are available.

The Twitter chief has given some hints about what the new chatbot could be named. On Tuesday, he tweeted “BasedAI”. A day later, he posted a meme depicting “Woke AI” and “Closed AI” battling, with “Based AI” then arriving as a Shiba Inu with a baseball bat and scaring them both away.

On the same day, he spoke at a presentation to Tesla investors about company plans and said, “AI stresses me out.”

When asked by an analyst if AI could help Tesla build cars, Mr Musk took a less optimistic line.

“I don’t see AI helping us make cars any time soon,” he said. “At that point … there’s no point in any of us working.”

After users posted about how ChatGPT is helping them draft prose, poetry and computer code, Mr Musk had raised a red flag, calling the AI “dangerously strong”.

The inside story of how ChatGPT was built from the people who made it


Sandhini Agarwal: We have a lot of next steps. I definitely think how viral ChatGPT has gotten has made a lot of issues that we knew existed really bubble up and become critical—things we want to solve as soon as possible. Like, we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it’s also quite easy to write prompts that make it not refuse what we wanted it to refuse.

Liam Fedus: It’s been thrilling to watch the diverse and creative applications from users, but we’re always focused on areas to improve upon. We think that through an iterative process where we deploy, get feedback, and refine, we can produce the most aligned and capable technology. As our technology evolves, new issues inevitably emerge.

Sandhini Agarwal: In the weeks after launch, we looked at some of the most terrible examples that people had found, the worst things people were seeing in the wild. We kind of assessed each of them and talked about how we should fix it.

Jan Leike: Sometimes it’s something that’s gone viral on Twitter, but we have some people who actually reach out quietly.

Sandhini Agarwal: A lot of things that we found were jailbreaks, which is definitely a problem we need to fix. But because users have to try these convoluted methods to get the model to say something bad, it isn’t like this was something that we completely missed, or something that was very surprising for us. Still, that’s something we’re actively working on right now. When we find jailbreaks, we add them to our training and testing data. All of the data that we’re seeing feeds into a future model.

Jan Leike: Every time we have a better model, we want to put it out and test it. We’re very optimistic that some targeted adversarial training can improve the situation with jailbreaking a lot. It’s not clear whether these problems will go away entirely, but we think we can make a lot of the jailbreaking a lot more difficult. Again, it’s not like we didn’t know that jailbreaking was possible before the release. I think it’s very difficult to really anticipate what the real safety problems are going to be with these systems once you’ve deployed them. So we are putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that. This is not to say that we shouldn’t proactively mitigate safety problems when we do anticipate them. But yeah, it is very hard to foresee everything that will actually happen when a system hits the real world.

In January, Microsoft revealed Bing Chat, a search chatbot that many assume to be a version of OpenAI’s officially unannounced GPT-4. (OpenAI says: “Bing is powered by one of our next-generation models that Microsoft customized specifically for search. It incorporates advancements from ChatGPT and GPT-3.5.”) The use of chatbots by tech giants with multibillion-dollar reputations to protect creates new challenges for those tasked with building the underlying models.
