AI does some good

JMCx4

Censorship is the Sincerest Form of Flattery
Sep 3, 2017
13,772
8,604
St. Louis, MO
It's a tool in a toolbox. But if you can't swing a hammer, you shouldn't mess with 21st Century tech either.
 

beowulf

Not a nice guy.
Jan 29, 2005
59,422
9,019
Ottawa
It's a tool in a toolbox. But if you can't swing a hammer, you shouldn't mess with 21st Century tech either.
Sadly some of the doom and gloom might come to fruition unless government regulates some stuff. A lot of jobs could be lost as AI is used more and more or basically takes over in many sectors.
 

JMCx4

Censorship is the Sincerest Form of Flattery
Sep 3, 2017
13,772
8,604
St. Louis, MO
Sadly some of the doom and gloom might come to fruition unless government regulates some stuff. A lot of jobs could be lost as AI is used more and more or basically takes over in many sectors.
Seems nobody wants to work any more these days, so it's a match made in AI heaven.
 

tarheelhockey

Offside Review Specialist
Feb 12, 2010
85,322
139,060
Bojangles Parking Lot
Sadly some of the doom and gloom might come to fruition unless government regulates some stuff. A lot of jobs could be lost as AI is used more and more or basically takes over in many sectors.

Good luck convincing capitalist governments that they need to regulate industry to keep obsolete employees on the payroll.

IMO the more likely thing is that AI becomes the culmination of a new industrial revolution. There flat-out are not going to be enough productive jobs in existence. There is nothing that a person can do to out-think an advanced AI, any more than they could out-work automation in the factories. Wide-scale unemployment will be inevitable unless we go the route of New Deal II, paying people to dig ditches and then fill them back in.

So we end up resetting our expectations. In a world where there are way more people than productive roles, all people cannot be measured according to their productivity.
 

beowulf

Not a nice guy.
Jan 29, 2005
59,422
9,019
Ottawa
Good luck convincing capitalist governments that they need to regulate industry to keep obsolete employees on the payroll.

IMO the more likely thing is that AI becomes the culmination of a new industrial revolution. There flat-out are not going to be enough productive jobs in existence. There is nothing that a person can do to out-think an advanced AI, any more than they could out-work automation in the factories. Wide-scale unemployment will be inevitable unless we go the route of New Deal II, paying people to dig ditches and then fill them back in.

So we end up resetting our expectations. In a world where there are way more people than productive roles, all people cannot be measured according to their productivity.
I don't disagree but unless people get paid to stay home and do other things this good lead to major upheaval around the world and dare I say riots or more.

A time might come for a basic income and real taxation on companies to pay for it. Could a future where 2-3 people share 1 job and get paid a basic income from the government to make up the difference be far away?
 

tarheelhockey

Offside Review Specialist
Feb 12, 2010
85,322
139,060
Bojangles Parking Lot
I don't disagree but unless people get paid to stay home and do other things this good lead to major upheaval around the world and dare I say riots or more.

Yes, that's what I mean by a revolution. Think of the impacts that the Industrial Revolution had on, say, India. The impacts that it had on urban vs rural dynamics everywhere. A lot of it was very ugly. Places that failed to go through the transition are still struggling to catch up.

This will look very different in the next cycle, but it will have a comparable overall impact. Entire economies are going to need to re-calibrate. A lot of people stand to be left out in the cold after generations of prosperity, and a lot of other people stand to plunge into deeper poverty, and that always leads to upheaval. It's incredibly important for governments to be thinking about this and heading off the issue before it catalyzes. This is the sort of environment that demagogues thrive in.


A time might come for a basic income and real taxation on companies to pay for it. Could a future where 2-3 people share 1 job and get paid a basic income from the government to make up the difference be far away?

From the lens we currently exist in, the most evident solution would be some form of UBI. I'm not sure we will be thinking the same way in 30 years, but it's a good start for the conversation. With UBI, people can better afford to take non-productive roles which add qualitative value rather than quantitative value.

Another thing to consider is that a person's productivity could be measured in terms of their consumption. The government could incentivize people to be consumers in order to inflate demand, which opens up opportunity for them to become productive in fulfilling demand.

But the greater point is that, no matter how we slice this, we are very quickly closing in on a world where the combination of AI + Automation will make human labor truly obsolete, and there will be no emerging field of productive labor which only humans can occupy. We're about halfway there already, but the tipping point is coming fast.
 

JMCx4

Censorship is the Sincerest Form of Flattery
Sep 3, 2017
13,772
8,604
St. Louis, MO
Interesting AI scenario briefed @ the British Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit ... :huh:

AI – is Skynet here already?​

skyborg-usaf-web.jpg


Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton. ...
 
  • Like
Reactions: beowulf

tarheelhockey

Offside Review Specialist
Feb 12, 2010
85,322
139,060
Bojangles Parking Lot
Interesting AI scenario briefed @ the British Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit ... :huh:

This is the best part:

"It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”


See, an upset human could never be that clever. AI really is an improvement.
 

kihei

McEnroe: The older I get, the better I used to be.
Jun 14, 2006
42,754
10,297
Toronto
Amazed some of this stuff was released with so few guardrails. To badly mix metaphors, how do you put safety measures on a cat that is already out of the bag?
 

NyQuil

Big F$&*in Q
Jan 5, 2005
95,867
60,298
Ottawa, ON
See, an upset human could never be that clever. AI really is an improvement.

Reminds me of this:

"While at the funeral of her own mother, she met a guy whom she did not know. She thought this guy was amazing, so much her dream guy she believed him to be, that she fell in love with him then and there. A few days later, the girl killed her own sister. What is her motive in killing her sister?"

"She was hoping the guy would come to her sister's funeral."

It's that apocryphal psychopathy test question.
 
  • Haha
Reactions: TheGreenTBer

Hippasus

1,9,45,165,495,1287,
Feb 17, 2008
5,616
346
Bridgeview
Self-conscousnesss is not amenable to GPT-4 and-or more advanced AI frameworks. I think unique subjective experience might not be reproducible by automation or algorithmic procedures. This is inspired by Edward Frenkel.
 

Ad

Upcoming events

Ad

Ad