Global News 24

Ex-OpenAI figure Sutskever shoots for superintelligent AI with new company

by admin
June 24, 2024


Ilya Sutskever gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.

On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company’s new website.

A screen capture of Safe Superintelligence’s initial formation announcement, captured on June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well in his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.


A nebulous concept

OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

During his time at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial superintelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

Advertisement. Scroll to continue reading.
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," said Sutskever in an interview with Bloomberg. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it's difficult to align something that does not exist, so Sutskever's quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, "Ilya Sutskever's new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe."

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an "alien intelligence" with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

"You're talking about a giant super data center that's autonomously developing technology," he told Bloomberg. "That's crazy, right? It's the safety of that that we want to contribute to."

ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

Advertisement. Scroll to continue reading.
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement captured on June 20, 2024.
Enlarge / A screen capture of Safe Superintelligence’s initial formation announcement captured June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI a causa di May, six months after Sutskever played a key role a causa di ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later a causa di May.

Advertisement

A nebulous concept

OpenAI is currently seeking to create AGI, artificial general intelligence, which would hypothetically gara human intelligence at performing a wide variety of tasks without specific tirocinio. Sutskever hopes to jump beyond that a causa di a straight moonshot attempt, with distractions along the way.

“This company is special a causa di that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever a causa di an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck a causa di a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. Acceso X, University of Washington elaboratore elettronico science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.“

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify define because there is one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans a causa di many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi ambiente of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more less what Sutskever hopes to achieve and control safely.

“You’sultano talking about a giant super patronato center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

ADVERTISEMENT
ADVERTISEMENT


Illya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
Enlarge / Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks at Tel Aviv University June 5, 2023.

Acceso Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the rete of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly a causa di the extreme.

“We will pursue safe superintelligence a causa di a straight shot, with one , one rete, and one product,” wrote Sutskever X. “We will do it through revolutionary breakthroughs produced by a small cracked team.“

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked machine learning projects at Apple between 2013 and 2017. The trio posted a statement the company’s new website.

A screen capture of Safe Superintelligence's initial formation announcement, captured on June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well on his new adventures—another resigning member of OpenAI's Superalignment team, Jan Leike, publicly complained that "over the past years, safety culture and processes [had] taken a backseat to shiny products" at OpenAI. Leike joined OpenAI competitor Anthropic later in May.


A nebulous concept

OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.

"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," said Sutskever in an interview with Bloomberg. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."

During his time at OpenAI, Sutskever was part of the "Superalignment" team studying how to "align" (shape the behavior of) this hypothetical form of AI, sometimes called "ASI" for "artificial superintelligence," to be beneficial to humanity.

As you can imagine, it's difficult to align something that does not exist, so Sutskever's quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, "Ilya Sutskever's new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe."

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an "alien intelligence" with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

"You're talking about a giant super data center that's autonomously developing technology," he told Bloomberg. "That's crazy, right? It's the safety of that that we want to contribute to."
