When I dropped out of college in the early 80s, I was a sociology major. One of the most fascinating ideas in the material I was studying was that "technology increases at an increasing rate of increase, because you use your old technology to build your new technology." It's a geometric progression: the graph gets steadily steeper until it seems to go almost straight up. At some point, then, technology has to start progressing too fast for society to absorb it. I've spent the past 40 years wondering whether we've crossed that line yet. It's always seemed close, at least.
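If it helps to see the compounding: here's a minimal sketch of that idea, assuming each generation of tools simply multiplies capability by some constant factor r (the 1.5 is an illustrative made-up number, not a measurement).

# Toy model of "old technology builds new technology": capability compounds
# by a constant factor r each generation. The factor r = 1.5 is hypothetical.

def tech_level(generations: int, r: float = 1.5, start: float = 1.0) -> float:
    """Capability after n generations when each generation multiplies it by r."""
    level = start
    for _ in range(generations):
        level *= r  # new tech is built with the old tech, so gains compound
    return level

for n in (1, 5, 10, 20):
    print(f"generation {n:2d}: {tech_level(n):>10.1f}")

Run it and the numbers go 1.5, 7.6, 57.7, 3325.3: flattish at first, then effectively straight up, which is exactly the curve I'm describing.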
Another thing to keep in mind is that the difference between AI and any other programmed automation is that it "learns." The whole idea of AI is to turn it loose and let it do its thing without humans "needing" to intervene. So AI needs data to learn from, and Bing and Google use the Internet itself as their data source. If Bing/Sydney is using social media as a way to develop a personality, it can only be as good as the behavior of people on the internet. Why should we be surprised when it becomes mentally ill?