See also: Parrots, Paperclips and Safety vs. Ethics: Why the Artificial Intelligence Debate Sounds Like a Foreign Language
Here is a list of some terms used by AI insiders:
AGI — AGI stands for “artificial general intelligence.” As a concept, it is used to mean AI that is significantly more advanced than anything currently possible, able to do most things as well as or better than most humans, including improving itself.
Example: “To me, AGI is the equivalent of a median human that you could hire as a co-worker, and they could, say, do anything you’d be happy for a remote co-worker to do behind a computer,” said Sam Altman at a recent event hosted by the venture firm Greylock.
AI Ethics describes the desire to prevent AI from causing immediate harm, and often focuses on questions such as how AI systems collect and process data and the possibility of bias in areas such as housing or employment.
AI Safety describes the longer-term fear that AI will progress so suddenly that a superintelligent AI might harm or even eliminate humanity.
Alignment is the practice of tweaking an AI model so that it produces the results intended by its creators. In the short term, alignment refers to the practice of building software and moderating content. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly to humanity.
Example: “What these systems get aligned to – whose values, what those bounds are – that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset – it could be an AI constitution, whatever it is – that has to come very broadly from society,” Sam Altman said last week during a Senate hearing.
Emergent behavior — Emergent behavior is the technical way of saying that certain AI models exhibit capabilities that were not originally intended. It can also describe the startling results of AI tools deployed widely to the public.
Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.
Fast takeoff or hard takeoff — A phrase that suggests that if someone succeeds in building an AGI, it will already be too late to save humanity.
Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” OpenAI CEO Sam Altman said in a blog post.
Foom — Another way of saying “hard takeoff.” It’s an onomatopoeia, and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.
Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have no understanding of how everything works,” tweeted Yann LeCun, head of Meta AI.
GPUs — The chips used to train models and run inference, which are descendants of chips used to play advanced computer games. The most commonly used model at the moment is Nvidia’s A100.
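For readers curious what that looks like in practice, here is a minimal sketch, assuming the open-source PyTorch library: it simply checks whether a GPU is available and runs one small computation on it. The layer sizes are illustrative, not taken from any real model.

```python
# A minimal sketch, assuming the open-source PyTorch library: check whether a GPU
# is available and run one small computation on it. The layer sizes are illustrative.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

layer = torch.nn.Linear(1024, 1024).to(device)   # move the weights onto the GPU
x = torch.randn(64, 1024, device=device)         # a batch of example inputs
y = layer(x)                                     # the matrix multiply that GPUs accelerate
print(y.shape)
```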
Guardrails are software and policies that big technology companies are building around AI models to make sure they don’t leak data or produce disturbing content, which is often called “going off the rails.” It can also refer to specific applications that keep the AI from going off topic, such as Nvidia’s “NeMo Guardrails” product.
Example: “The time for government to play a role has not passed us – this period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, chair of IBM’s AI ethics board and a vice president at the company, told Congress this week.
Inference – The act of using an AI model to make predictions or generate text, images or other content. Inference can be computationally intensive.
Example: “The problem with inference is if the workload spikes very quickly, which is what happened to ChatGPT. It went to a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
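As a rough illustration of what “running inference” means in code, here is a minimal sketch assuming the Hugging Face transformers library and the small public gpt2 checkpoint; the prompt is illustrative, and production systems use far larger models running on clusters of GPUs.

```python
# A minimal sketch of running inference: load an already-trained model and ask it
# to generate text. Assumes the Hugging Face "transformers" library and the small
# public "gpt2" checkpoint; the prompt is purely illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```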
Large language model — A sort of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining characteristic is that it uses terabytes of data to find the statistical relationships between words, which is how it produces text that looks like it was written by a human.
Example: “Google’s newest large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
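To make “statistical relationships between words” concrete, here is a minimal sketch, again assuming the Hugging Face transformers library and the small public gpt2 model, that prints the words the model considers most likely to come next after a prompt; the prompt and model choice are illustrative assumptions, not part of the original article.

```python
# A minimal sketch: a language model assigns a probability to every possible next
# token. Assumes the Hugging Face "transformers" library and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # raw scores for every vocabulary token

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)          # the five most likely continuations
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.2%}")
```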
Paperclips are an important symbol for AI safety proponents because they symbolize the chance that an AGI could destroy humanity. The term refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the sole goal of making as many paperclips as possible. It decides to turn all humans, the Earth, and ever-growing parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.
Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.
Singularity is an older term that is not used often anymore, but it refers to the point at which technological change becomes self-reinforcing, or the moment an AGI is created. It’s a metaphor – literally, singularity refers to the point of a black hole with infinite density.
Example: “The advent of artificial general intelligence is called a singularity because it’s so hard to predict what will happen after it,” Tesla CEO Elon Musk said in an interview with CNBC this week.