Thank you Nadia. I am an executive coach who works with individuals and teams in organizations around the principles of Transformational Leadership. Much of my work is helping influencers in organizations face the need for knowing, understanding, practicing, and applying healthy Emotional Intelligence (EQ) and Social Intelligence (SQ). Yes, they are very smart people, and most of them want to serve the greater good. Yet there has been little to no training in EQ and SQ. To say it another way, they know how to live with their HEAD (intelligence and gifts) - being rewarded for it with good grades and higher salaries. Yet most have had little to no experience in their professional and personal lives living from their HEART. It is the HEART which is the epicenter of any healthy behavior.
To you and all the trusted advisors and influencers in this area, please...continue to work on your own heart (me included) and keep talking, writing, coaching, mentoring, etc. about this!
I’m a retired biomedical engineer who focused primarily on medical technology risk management. Your concerns are valid, and I could add many more that follow from them.
AI is not immune from the consequential failures and accidents associated with any technology. I’ve seen medical device manufacturers who are as ethical as they come and comply with every regulation make consequential design mistakes. And then there are companies like Boeing that follow the shareholder value gospel of Jack Welch above all else.
AI is a tool with great promise. There is no doubt about that.
As was gunpowder.
And money.
The 4th factor of social media disintegration could be that it has created an epidemic of the Dunning-Kruger effect--the bias in which people of limited competence in a particular area overestimate their abilities. It might have started as people's honest desire to be more informed or smarter, but we are just not very good at discerning who are the experts on social media or how to evaluate the soundness, authenticity, etc., of the information we see there. We think we're being educated but instead are fed a junk diet of ideas and from there go down rabbit holes of confirmation bias.
True!
But the aforementioned “bias in which people of limited competence in a particular area overestimate their abilities” is also a bias.
The economist Thomas Piketty wrote a book about capitalism in which he described the problem with uber-wealth: it grows to the point that it takes on a life of its own, and the massive pile of wealth directs the trust managers instead of the trust managers managing the wealth. What exists to keep that from happening with AI? What prevents AI and the 'net from managing us instead of us managing AI and the 'net? Or has that already happened and we're unaware of it?
I will listen to the whole panel discussion on Hope. Just hearing a small bit of the moderator's opening remarks was enough to whet my appetite for more.
Mainly, I'm grateful that you posted this because I've been in an emotional pit since hearing about the SCOTUS ruling on Monday morning, so this post gives me something to focus on.
One anecdote before I head back to my job: On Friday, June 7th, I received a call from my husband. He told me that he'd parked the car (a 2004 Honda) at a quick-shop type of place; while he was inside, our car caught on fire. No one was hurt, but I kept fighting back the scary "what if" scenarios that were screaming for attention as I went to get him and our dog (a 115 lb. Lab).
After we were home, I texted my boss that I needed some time to wind down, then we all made a puppy pile on our big ol' king-size bed.
Here's the point: As soon as we were home, our dog was pretty much acting like nothing unusual had happened. Dogs are pretty much in the moment. (They will remember repeated pain/abuse, but a one-off trauma doesn't act as a trigger for fear-based behavior.) A few hours later, I was back at my desk and working, but I was basically ok, too. I was still a tad shaken up, but I was able to work because I forced myself to focus on everything to be grateful for in that whole event: No one was hurt, someone noticed the fire and called 911, someone noticed our dog in the back seat (the fire was in the front seat) and worked to get him out, workers at the shop grabbed fire extinguishers immediately and sprayed down the car, the fire station was only a few blocks away, etc, etc.
Honestly, I have a solid, recent experience of how effective it is to practice gratitude in order to get myself out of my scary head.
I just haven't been able to do it over the past few weeks.
So, thank you for this because I'm sure this is my first step back to some kind of serenity.
I like the reframe you offer here. You could have focused just on the fear and the what-ifs. But instead you let your dog lead you. And you allowed fear and what-ifs to give way to gratitude and present-moment living. Thanks for sharing it!!
Dogs = God with fur, 4 feet, and a tail. ❤️🐾
yes.
I work in big tech and the current projects I’m using AI for are making all bank applications accessible for the hearing and vision impaired — because those folks need checking accounts too, and often can’t complete applications without them being read to them by a third party. The future is bright if we steward it correctly ✨
It’s physically impossible for me to be more in agreement with your words on this subject.
Several months ago, I put a request into my search engine about a not very well-known historical figure. Nothing came up, but something, I think it was Copilot, asked me for info on this person. So, to test it, I gave all sorts of lies about the person. Then I went back and asked the search engine for information about the person. And the computer spouted back all the nonsense I had put in. I quickly erased the inaccurate info and checked the search engine to make sure it was corrected. It was. This is dangerous stuff, my friends.
Like Fox News intentionally does all the time.
Thank you for asking the real people questions. I believe the tech bros really were out to help humanity in the beginning, then, when the dollars started flowing and "use cases" were being developed every second, helping humanity became the lowest item on the list.
Technology-oriented people always think that the latest tech will save the world. Always. (If they profit from the tech, they are evangelical about it.) Obviously AI has previously-unimagined helpful uses--see the comment below about using AI to assist the vision and hearing impaired; and what would we do without those medical scans? And yet... the Chinese invented gunpowder to create celebratory displays, and now we have mass shootings. Reach back to 1901 and the first radio broadcast. I pick that intentionally because it seems so out of date. Radio was going to save the world. In fact, Hitler saw the possibilities of radio and used that new tech to plan and execute his strategies during World War II. It would be naive, to say the least, to think that AI will be any different. Yes, people. And the cry of the NRA--"Guns don't kill people; people kill people." True--and a massive half-truth (quarter-truth? tenth-truth?). Systems that people create have a life of their own. At that point, it's almost impossible for "people" to undo them. So since tech, late-stage capitalism, and the rest are here, we can use them to moderate them in whatever way we can. To quote the late Kurt Vonnegut, "So it goes."
I think about this side of the story: I love what AI can bring us in the near future, and honestly I can't wait.
Emotional intelligence should be at the table, and because I think we are on that side of the table, God will give us the right attitude to bring love, empathy, and feelings.
AI can learn, and hopefully a sophisticated artificial intelligence can learn from us empaths too.
Thank you Nadia ❤️ GBU
I spent the 2010s serving the VERY remote community of Les Anglais, Haiti with quarterly medical clinics. Our organization was one of fewer than a handful of "blan" organizations established there when Hurricane Matthew tore through in October 2016. Afterwards, the local minister with whom we had partnered, and whom we had trusted for years, became a key point person for aid organizations. About 10 days after the hurricane, he looked at me and said he was grateful for our organization because it made his name "gwo gwo anpil" (very, very BIG/important). Absolute power corrupts absolutely. Our relationship with him never recovered. I share your love of our messy human selves and your qualms for the near future.
Thank you for asking the necessary questions. You nailed this.
I usually read. Today I listened. And then I listened again. In truth, my brain is overcome with a plethora of thoughts about organic connection, power, technology, isolation, disconnection, social media, AI, nostalgia, hope, anger, and more. Lots to sort through. Thanks for sharing your experience.
I know very little, almost nothing, about AI. But years ago Walt Kelly drew a famous Pogo strip where Pogo said, "We have met the enemy and he is us." I feel like I've pretty well given up on politics to make a better world for us. This might be a copout, but more and more I'm feeling like local churches (and maybe "church-like" gatherings) are where my hope lies. I know churches can be full of problematic people. But I feel like it's local gatherings of people who commit to growing in love and service toward each other and their community that are going to be the difference-makers. It's like your breaker panel metaphor. I don't think we're really suited to many huge solutions, although I don't discredit big solutions that have been effective. I think what we can really do is not necessarily love all people-kind but love the people we most come in contact with. That's hard enough. I don't trust us to handle AI well, but I don't know what I can do to preempt its incursion into our world. What I do think is POSSIBLE is for me to get better and better at loving my neighbor.
Excellent point. Thank you for this.
Nadia - I am also curious if any of the presenters talked about how much energy AI is using. The last relatively unbiased article I read - which I think was in the Washington Post - said that AI uses more computing power (read: electrical energy) than Bitcoin server farms, and that those within the industry are hoping for an 'energy miracle' in the form of nuclear fusion. It seems somehow incongruous to me that the generation that is adamantly pro-environment seems radically unaware of the environmental impacts of the tools they are a huge part of creating. Sure, AI generates cool pictures (as long as you don't count the fingers too closely), but it seems like the "next shiny toy" coming out of the box. I used to work in academic research, and I absolutely believe that AI can help us with new drug discoveries, figure out the most effective ways to treat diseases, etc., but should we put the brakes on its broad general use while it is environmentally unsustainable? Of course, that question is likely already answered, as several other commenters (and you yourself) mentioned shareholder value. So, we will burn up the planet even faster while making nice shiny toys and cool pics. I do hope that we find apps to help differentially enabled folks, find new drugs, etc., but I hope we don't create killer robots (article yesterday in the NYTimes).
Thanks for bringing this up, Jim. Eric is the one who educated me on this issue - one that I had never learned about, much less considered, until he did. In the session I attended, yes, it was mentioned that the immediate regulation of nuclear science led to the near-total suppression of innovation in the private sector. So, the presenter claimed, it was never allowed to be used for energy in the way it could have been. Someone else brought up that the hardware used now for servers will be obsolete when AI is more dominant - I guess it uses different "stuff" - so the result will be a great deal of electronic waste.
Great and often ignored point!