Are ‘prompt engineer’ jobs real? Yes, but maybe not for long.

It’s the fantasy of Stable Diffusion image addicts, ChatGPT tinkerers, and everyone else who can’t get enough of the new crop of AI content generation toys — I mean tools: to get rich just by playing around with AI.

“Ladies and gentlemen, it is happening: ‘Prompt engineer’ is now a job title, and it pays between 175k to 300k a year.” So claims TikTok user @startingname in a calm but definitive tone in his March TikTok post. To qualify for the job, he says, one just has to “spend time in the algorithm.” In other words, it’s a six-figure job for people who enjoy tinkering with generative AI.

The New York Times called prompt engineering “a skill that those who play around with ChatGPT long enough can add to their résumés.” The Washington Post called it “Tech’s hottest new job.” Sinem Buber, lead economist at ZipRecruiter, told CNN that thanks to prompt engineering, “there will be more jobs created because of ChatGPT.”

Is this pure hype? Not exactly. There are real companies seeking prompt engineers and offering generous pay packages. The most notable is Anthropic, maker of the large language model Claude; the company is famously hiring a prompt engineer and offering the eye-popping six-figure sums @startingname referred to in his TikTok video. But if you’re coming down with a case of prompt engineering fever, take a sip of water, breathe, and know that most prompt engineer jobs aren’t really all that lucrative, and the ones that are might not exist for very long.

So before you throw your application to medical school in the shredder in favor of your dream job making hilarious Pope pictures with Midjourney, here’s what our deep dive into prompt engineer jobs turned up.

What is a prompt engineer, and why are these jobs so exciting?

Victor Sanh, a researcher at Hugging Face, an AI company that conducts research and builds AI applications, was one of the first researchers to approach prompt engineering from a scholarly angle, and he broke it down for Mashable:

“Systems such as ChatGPT, GPT-4, Claude, and others that have undergone Reinforcement Learning from Human Feedback (RLHF) tend to be more resilient to the formulation of prompts [and] instructions,” he said, meaning these systems basically understand what they’re told in plain language — that’s the whole point. But, Sanh went on, “they frequently make mistakes, either failing to comprehend the query correctly or failing to recognize that the query is underspecified, making it impossible to provide a proper response.” That’s where prompt engineering comes in.
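To make that a little more concrete, here is a rough, hypothetical sketch of what “refining a prompt” can look like in practice. The ask_model function below is a stand-in for whatever interface a given model exposes, not any particular company’s API.

```python
# A minimal, hypothetical sketch of the kind of refinement Sanh describes.
# `ask_model` is a stand-in for whatever interface a given model exposes;
# it is not a real library call.

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around some chat model's API."""
    raise NotImplementedError("wire this up to your model of choice")

# Underspecified: the model has to guess the audience, length, and format.
vague_prompt = "Summarize this report."

# Engineered: the same request with the missing constraints spelled out.
refined_prompt = (
    "Summarize the report below for a reader with no technical background. "
    "Use plain language, keep it under 120 words, and list any action items "
    "as bullet points. If a detail is missing from the report, say "
    "'not specified' rather than guessing.\n\n"
    "Report: {report_text}"
)

# ask_model(vague_prompt) leaves plenty of room for the failure modes Sanh
# describes; ask_model(refined_prompt.format(report_text=...)) leaves far less.
```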

According to Sanh, another technique exists called chain-of-thought prompting, in which a query is broken down into sub-queries. Sanh called this “prompt engineering on steroids.”
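If that sounds abstract, here is one hedged illustration of the idea. The arithmetic example and the decomposition are invented for demonstration, not taken from Sanh’s research.

```python
# A rough, hypothetical illustration of chain-of-thought prompting: instead of
# asking one big question, the prompt walks the model through sub-queries.

question = "A clinic saw 48 patients on Monday and 25% more on Tuesday. How many in total?"

# One-shot version: easy for a model to fumble.
direct_prompt = question

# Chain-of-thought version: the query is broken into explicit sub-steps.
chain_prompt = (
    f"{question}\n"
    "Work through this step by step:\n"
    "1. How many patients were seen on Tuesday?\n"
    "2. Add Monday's and Tuesday's totals together.\n"
    "3. State the final answer on its own line."
)

# The decomposition can also happen across separate calls, feeding each
# intermediate answer into the next prompt rather than one long instruction.
```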

If you’re attracted to the idea of being a prompt engineer, this might sound wearyingly technical. Even so, there’s no more attractive topic in tech right now than generative AI. Starting about five years ago, it felt like all the tech news oxygen was getting sucked away by a big shiny object called crypto, which average people found inscrutable and dreary. But ever since OpenAI released ChatGPT late last year, there’s been a whole new shiny object in tech, and this one is much easier for people, including kids, to understand and get excited about. If you consider ChatGPT an app, it is perhaps the most viral app of all time.

So to one of the hundreds of millions of people who love playing with generative AI, the idea of being a prompt engineer probably sounds a little like being a video game tester for a living. That is, before you find out that being a video game tester can be an underpaid, exhausting slog.

Unfortunately, if you get one of these jobs — meaning you’re one of the few people with “prompt engineer” on their résumé lucky enough to do it full time in exchange for actual money — you will most likely spend eight hours per day spotting when a system is, as Sanh put it, “failing to comprehend the query correctly or failing to recognize that the query is underspecified,” and devising ways to make a given system produce a “proper response.” That may sound like drudgery, or it might sound rewarding, but it’s the meat of what you’ll be doing.

Someone’s life may depend on you doing a good job as a prompt engineer.

Benjamin Rader, a graduate research fellow in the Innovation and Digital Health Accelerator at Boston Children’s Hospital, is part of a team that is hiring a prompt engineer. Generative AI can, according to Rader, “help a doctor generate notes faster.” Boston Children’s Hospital, he explained, needs “someone who can help us refine the asks of the generative AI, so the information we’re spitting out is limited in hallucinations, and specific to each specific task.”

If you were imagining this would be a matter of making ChatGPT write good knock-knock jokes, think again. “Each of these tasks is going to be really niche,” Rader said. Their internal system isn’t trained on Wikipedia and Reddit, but instead on sensitive notes about diagnoses, patient billing information, and comments between medical professionals about a given patient in an extremely specific context. “We’re an institution that often serves very specialized care,” he said.

Some prompt engineer jobs appear much less specialized, but also less lucrative

It’s worth noting that in some places, the term “prompt engineer” has a somewhat looser definition than Sanh’s. One current ad on Upwork, for instance, is looking for, essentially, someone to generate boatloads of articles about cannabis. In the content marketing and SEO worlds, jobs in which people grind out many, many words per day are not new (the writer of the article you’re reading got his start at such a job), and this job posting seeking someone to use GPT-3 and Jasper to produce “800-1,500 word blog posts on various topics related to Hemp, Cannabis, and CBD” is clearly a high-tech version of those. This is not, it should go without saying, one of those fabled six-figure jobs in the AI field.

What are the qualifications of a good prompt engineer?

This career field is too new for any sort of credentialing process to stand in the way of qualified applicants. In fact, Anthropic’s ad makes it sound like they’ll consider whichever randos are able to make strong enough cases for themselves. “If you haven’t done much in the way of prompt engineering yet, you can best demonstrate your prompt engineering skills by spending some time experimenting with Claude or GPT3 and showing that you’ve managed to get complex behaviors from a series of well crafted prompts,” the ad says. And hey, anyone can do that, as long as they have access to one of the models.

But Matt Bell, a member of the technical staff at Anthropic, suggested to Mashable that a background in coding and machine learning — the process by which models are “trained” — is going to be a big help for applicants. “Having coding knowledge becomes useful when creating systematic evaluations of prompts, and helps with prompting for coding tasks,” Bell said in an email. Additionally, “having an ML [machine learning] background can be helpful for understanding the overall strengths and weaknesses of these models.”
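What does a “systematic evaluation of prompts” actually involve? Roughly something like the sketch below: a handful of candidate prompts scored against a small labeled test set. The test cases, prompts, and ask_model stub are all invented for illustration; this is not Anthropic’s (or anyone else’s) real tooling.

```python
# A minimal sketch of a systematic prompt evaluation, under the assumption
# that you have some way to call a model. `ask_model` is a hypothetical stub,
# not a real API; swap in your own client.

def ask_model(prompt: str) -> str:
    # Stub so the sketch runs end to end; a real version would call a model.
    return "positive"

# A tiny labeled test set. A real one would be much larger.
test_cases = [
    {"input": "The movie was a waste of two hours.", "expected": "negative"},
    {"input": "I would happily watch it again tomorrow.", "expected": "positive"},
]

# Two candidate prompts competing for the same task.
candidate_prompts = [
    "Classify the sentiment of this review as positive or negative: {input}",
    "Review: {input}\nAnswer with exactly one word, 'positive' or 'negative':",
]

def accuracy(template: str) -> float:
    hits = 0
    for case in test_cases:
        reply = ask_model(template.format(input=case["input"]))
        if case["expected"] in reply.lower():
            hits += 1
    return hits / len(test_cases)

for template in candidate_prompts:
    print(f"{accuracy(template):.0%}  {template[:60]}")
```

The point, roughly, is that prompts get treated like code: tested against examples and compared on measurable criteria rather than on vibes.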

Still, Bell hastened to add that the actual act of prompt engineering doesn’t involve typing out code, and that Anthropic’s “best prompter is a philosopher.”

Indeed, “It takes logic and reasoning to understand how an AI is going to respond,” said Boston Children’s Hospital’s Benjamin Rader. So brush up on your Aristotle, applicants.

Are these jobs going to last well into the future?

Sanh, the researcher at Hugging Face, believes the role of prompt engineer is temporary, explaining that the fragility of AI systems is a well-known issue that the companies and organizations producing these models are working to mitigate. After all, in the end, you’re not supposed to interact with a chatbot by typing out a complicated string of stock phrases. You’re supposed to just converse with it.

Sanh compared some prompt engineers working directly with models to “the PR department preparing someone for a media interview.” Their role is to coach the system, which is a bit like an unpolished candidate for elected office, so that it will “behave properly.” He pointed to Microsoft’s Bing as an example. Bing’s chatbot famously melted down in conversation with New York Times columnist Kevin Roose, and then got rolled out to a wider audience in a much more palatable form, minus the meltdowns. That transformation, according to Sanh, probably involved an epic feat of prompt engineering, because it “likely required a lot of interactions with the system to refine.”

On the other hand, Rader’s team at Boston Children’s Hospital isn’t training a chatbot, but an automated information processing system for a hospital. “In a healthcare system,” he said, “having a trained person interacting with the AI is going to be important for a really long time. That might not be the case for all fields, but certainly the case for a healthcare system.”

In any case, lucrative new AI jobs exist now. Some of them may not exist in a few years, but thanks to AI, a lot of jobs may not exist in a few years. Get in while the getting’s good.
