
On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company’s new website.

Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well on his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.
A nebulous concept
OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.
As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”
Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantitatively define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.
Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.
“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”
