Marc Bergevin: We want to compete Edition

Status
Not open for further replies.

yianik

Registered User
Jun 30, 2009
10,667
6,100
I'm thinking some people here have watched the Terminator series of movies a lot. And have thought about them a lot. Personal favourite was Judgment Day. Terminator was also a good watch.

Don't know why we want to venture into trying to create a machine brain to replicate a human one. I mean why ?

Personally have no idea about the limitations, but Waffledave's comments are reassuring, to some extent. Think the likeliest scenario besides really amazing programmable robots / drone-type humanoid bots is cybernetic enhancement of humans.

And when I say really amazing programmable robots / drone bots, I am not saying that is a good thing. Imagine a government having that kind of technology and its military/ civil security application. Yikes.
 

BehindTheTimes

Registered User
Jun 24, 2018
7,090
9,347
There is fear of a "technological singularity" when it comes to AI but honestly, I don't know if we'll ever see something like that happen without some major, major changes in how computing works. Maybe one day, after we've mastered quantum computing, but even then it may not be enough. I've done a lot of work in AI, built some AI platforms and spent a good portion of my studies in that domain. Despite the amazing advancements we've made, it's still pretty damn rudimentary in the grand scheme of things. We still can't find ways to transmit information the way the neurons in our brain transmit information. We can build AIs that are VERY good at specific tasks (far better than a human). But we can't build one that can solve generalized problems the way a human can; it needs to be in a very specific context.

When I first got into it I really thought AI would be this incredibly complex thing... But in reality the majority of AI work is just complex decision trees and statistics. We have so much work to do in things like natural language processing and other "simple" tasks that the idea of a killer AI that will wipe us out will probably never happen. We'll destroy ourselves some other way before we get there.
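The "complex decision trees and statistics" point is easy to see in a few lines. This is a toy sketch of my own (the features and thresholds are made up, not from any real AI platform): a hand-coded decision stump. Libraries learn these thresholds from data, but the resulting model is exactly this kind of nested if/else.

```python
# Toy decision stump with hypothetical hockey-flavoured features.
# A learned decision tree is this same structure, just with
# thresholds fitted from data instead of written by hand.

def classify_shot(distance_ft: float, angle_deg: float) -> str:
    """Classify a shot as 'dangerous' or 'routine' from two features."""
    if distance_ft < 20:
        # close to the net: angle decides
        return "dangerous" if angle_deg > 15 else "routine"
    return "routine"

print(classify_shot(12, 40))   # close in, good angle -> dangerous
print(classify_shot(55, 30))   # point shot -> routine
```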

We still can't build a bot that beats my ass in NLHE. I think we're good for a bit.
 

waffledave

waffledave, from hf
Aug 22, 2004
33,438
15,780
Montreal

I was going to mention this but my post was already long enough. I think we will see a lot of ethical dilemmas that will pop up regarding cybernetic enhancements.

For example, imagine we come up with an implant/replacement for our eyes that grants us far superior vision or the ability to zoom or see different spectrums of light. Maybe they have tiny computers with facial recognition software, or provide you with a HUD with enhanced visual information. Imagine a cochlear implant that gives us superior hearing. These aren't that far off. Right now we are developing things like this to help people with vision and hearing problems. But then... if we can make ourselves "better," why don't we?

There are people today with extremely advanced artificial limbs. Limbs that mimic our regular body parts but that don't get tired, don't get damaged or hurt. They don't get cancer or disease. They aren't perfect, but one day they will be. And at that point, why not just replace your arm or leg with one that is "better?"

What if we develop brain implants that enhance our ability to process information?

At a certain point, it will become normal to enhance yourself cybernetically. And at that point, you will have so many ethical issues that come up that it's really going to change a lot of things for humans. Will cyber-enhanced humans still be human? Will they want to be called human? Are you still "you" if you start replacing yourself with machine parts? What happens when you're more machine than biological person?
 

Walrus26

Wearing a Habs Toque in England.
May 24, 2018
3,155
4,895
Peterborough, UK

You are the storyboarder from the dev team behind the original (and best by miles) Deus Ex PC game. Spotters fee claimed.
 

yianik

Registered User
Jun 30, 2009
10,667
6,100

Pandora's box, anyone?

It's actually staggering. Too big for my brain, but the discussions will be on a never-before-reached scale. Take the examples you gave.

A welder loses an arm and you replace it. Ok. A pitcher loses an arm and replaces it and can throw a 130 mph fastball. Can he play? No? Is that discrimination? What if he chooses to replace his arm?

What about the government requiring everyone to have some kind of brain chip inserted to curb violent behaviour, which is turned off if you are in the police, military, etc.? No doubt this would be called for by many of the public.

It would be foolish of me to go on because the what ifs are endless.

Unfortunately, even if the right decisions are made by the biggest of powers, which would surprise me, humans being humans, there will always be those who will still go ahead.
 

Miller Time

Registered User
Sep 16, 2004
22,966
15,315
That team was not great IMO, it overachieved.

Yeah, I know... shame that Molson wasn't bright enough to realize that before he rewarded MB with the huge contract extension despite doing nothing more than piggybacking off of inherited talent and a run of unprecedented luck (the Habs led the league in fewest games lost to injury through MB/MT's first few seasons).
 

Kriss E

Registered User
May 3, 2007
55,329
20,272
Jeddah
I didn't really think there was anything to address. We simply disagree. I think our developmental staff has been mostly bad, but I don't/can't blame them for Tinordi; he was never going to amount to anything imo.

I don't care if you think maybe he could have been a bottom pairing guy for a few years. It doesn't take away from the point I was making. Even if he was, who gives a **** about that? Is it really a failure that it didn't happen? If the margins are that close then we made a bad pick.
I never claimed it to be a good pick, but we still messed up his development.
 

Kriss E

Registered User
May 3, 2007
55,329
20,272
Jeddah
McDavid is already a top-two player in the world, and he's not leading the Oilers to much at all.
He's been here for a couple of seasons. He made the POs already. I'm sure he'll lead them well over the next 8 years at some point. They are not going to be a bottom-dwelling team for much longer.
 

Grate n Colorful Oz

Hutson Hawk
Jun 12, 2007
35,310
32,163
Hockey Mecca
A computer can add to his knowledge base though.

This is pretty much where AI is going these days. It's called deep learning. It's only the beginning, and for now it's used for very specific tasks like speech recognition, playing a game, or analyzing images, but eventually you'll have a computer able to do it all, and obviously better than humans. No more strictly programmed answers, just a knowledge base with algorithms to make a choice using that knowledge base. And of course the knowledge base can be updated by the AI itself based on its successes and failures. It requires a lot of processing power though. Just for one specific task it requires pretty much a supercomputer and also a truckload of money.

Deep learning - Wikipedia

Top 10 artificial intelligence (AI) technology trends for 2018

Machine Learning and AI trends for 2018: What to Expect?

AI vs. ML vs. DL | Skymind
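The "knowledge base updated by the AI itself based on its successes and failures" loop can be sketched as a bare-bones bandit agent. This is my own illustration, not anything from the links above; every name and number here is made up, and it's far simpler than actual deep learning:

```python
import random

# Sketch of a self-updating "knowledge base": the agent keeps an
# estimated reward per action and refines it from its own successes
# and failures (an epsilon-greedy bandit, the simplest such loop).

random.seed(0)
values = {"a": 0.0, "b": 0.0}       # knowledge base: estimated reward
counts = {"a": 0, "b": 0}
true_reward = {"a": 0.2, "b": 0.8}  # hidden from the agent

for step in range(500):
    # explore 10% of the time, otherwise exploit current knowledge
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: the knowledge base updates itself
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent should settle on "b"
```

The point of the sketch is the feedback loop: nothing is "strictly programmed" about which action is best; the estimates converge from experience alone.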

There's only a small minority in AI research who actually understand the need to concentrate on consciousness rather than 'intelligence'. No matter the intricacies and complexity of the algorithms, those will remain glorified probability machines and calculators and will remain far from a singularity, far from self-awareness. Calculation is not intelligence. My calculator has me beat by a light year when it comes to calculus, but it has zero intelligence.

The whole problem lies in the fact that we are an emergent biological species and our faculties, genes and behaviours have evolved slowly in relation to our environment (Jean-Baptiste Lamarck was partly vindicated by the science of epigenetics). Our consciousness has grown by feeling, by our senses, by this intimate relationship with all the cues that come into our minds through our senses.

Consciousness can't sprout into existence and evolve without this relationship. Every single one of our thoughts originates from our limbic system (A. Damasio), the seat of emotions, where all our attraction/aversion cues get stored. It is from sensing and feeling that our intelligence has evolved. The old diehard myth of cold calculation and of psychopaths being geniuses is a complete farce. Sure, childhood trauma of one or all varieties, physical and psychological trauma, or neglect, will indeed create bigger frontal cortices (executive functions) than the average (along with a bigger amygdala; violence & fear), but the highest yield for intelligence comes from maternal care (M.C. Diamond; F. Benes).

Creativity originates out of the use of all four major cortical regions (frontal cortex, temporal lobe, parietal lobe and occipital lobe). The facilitator, the sorta hub for all those regions to interact, is the anterior cingulate cortex and the corpus callosum. The latter is the left/right bridge. The former deals with the most basic and central of our faculties: sociality. There is a growing focus on sociality as the key driver to intellectual growth, which fits in like a glove with all the neurobiology research around emotional and intellectual growth. They go hand-in-hand. One can inhibit or drive the other and vice versa (R. Sapolsky), but most of the time, they work in tandem.

Creativity, abstractions and metaphors all seem to stem from all the regions that are central to our sociality, especially language.

One last argument versus the prevailing consensus in AI research, and it's a big one:

In the animal kingdom, especially and almost exclusively among mammals, the few rare species that display self-awareness (the rouge test and other similar experiments) are also the ones with the highest empathic response and most complex sociality: elephants, chimps & bonobos, cetaceans, and the corvid family.

There is no greater complex sociality than with the most intelligent of mammals, us.

You have to differentiate yourself from others and from your environment to be self-aware. That's why empathy and sociality are key, and why so much research is gonna be wasted with the wrong idea as the starting point.

High levels of problem solving and learning didn't start with tool use. It increased as our social minds increased. Theory of mind is where it all started.

Without senses and sociality, all we'll have are more complex systems of control/capacities/automation, and the code will forever be limited to the boundaries of its scope, like Cobra Commander said, but put differently.

I could go on, but I'll stop here, it's getting late.
 
Last edited:

Miller Time

Registered User
Sep 16, 2004
22,966
15,315

Unfortunately, as MB has demonstrated over the past 7 years, creativity & intelligence are not requirements when it comes to the destruction or the dismantling of human potential...
 

Lshap

Hardline Moderate
Jun 6, 2011
27,350
25,110
Montreal
Nicely done. You lassoed a big subject into a neat punchline.
 

theghost1

Registered User
Oct 30, 2017
1,509
571
When, not if, Montreal gets off to another bad start this year, what is Bergevin going to say? Attitude? It is not bloody attitude, it is lack of talent from an inept GM.
 

Bryson

#EugeneMolson
Jun 25, 2008
7,113
4,321

Nah dude, he will bring in the Dalai Lama and blame the failed season on the players' chakras not being aligned.
 

PaulD

Time for a new GM !
Feb 4, 2016
29,159
16,095
Dundas
Not true. I knew about him and I wasn't a "connaisseur" at the time.

A lot of people knew he was the best of the three Russians drafted in the 1st round. He just happened to drop because it was at the height of the KHL threat.
OK. Just don't recall anyone saying the Habs should have taken him instead of Tinordi. Until years later.

He sure flourished in Tbay. Would likely have been a spare part in Montreal and traded by now. :nod:
 

Bryson

#EugeneMolson
Jun 25, 2008
7,113
4,321

Terminator is one of the most important and prophetic movies in human history. Skynet is not a matter of if, but when... :sarcasm:
 