The Plexus para
May. 7th, 2003 01:12 pm

So here's what finally emerged...
Current interests focus on the impact of e-health technologies, especially embedded knowledge instruments, on the health care system. Key questions range from how patient empowerment changes medical practice to how one markets drugs to a physician operating in an electronic environment. On a broader philosophical level, I am interested in the social limits to artificial intelligence-based computing, which seem to me more important than traditional thinking about the limitations of Turing machines. (So named for Alan Turing, the brilliant mathematician considered the founder of modern digital computing.) For instance, using an AI-based approach, you could have a system that takes a detailed medical history of a patient, matches it to a huge database of treatments and outcomes, and recommends to the treating physician the treatment suggested by the evidence. But suppose what it tells us is counterintuitive? How do we react when science, as processed by a powerful machine, tells us something that doesn't feel right? Are we, as individuals, and as a society, capable of coping with that? Artificial intelligence doesn't deal with that. It's a big question. If people are making decisions on how to invest in some bizarre derivatives market, that's one thing. If the question is do we operate or give chemotherapy, it gets a bit more visceral.
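The kind of system described above — match a patient's history to a database of prior cases and surface the treatment the evidence favors — can be sketched in a few lines. This is purely a hypothetical illustration: the case records, feature names, similarity measure, and treatments are all invented here, and a real evidence-based decision-support system would be vastly more sophisticated.

```python
# Hypothetical sketch of an evidence-matching recommender.
# All data, field names, and scores below are invented for illustration.

from collections import defaultdict

# Toy "database" of past cases: (patient features, treatment given, outcome 0-1)
CASE_DATABASE = [
    ({"age": 64, "smoker": 1, "stage": 2}, "surgery",      0.70),
    ({"age": 61, "smoker": 1, "stage": 2}, "chemotherapy", 0.55),
    ({"age": 66, "smoker": 0, "stage": 2}, "surgery",      0.80),
    ({"age": 38, "smoker": 0, "stage": 1}, "chemotherapy", 0.90),
    ({"age": 41, "smoker": 0, "stage": 1}, "surgery",      0.60),
]

def similarity(a, b):
    """Crude similarity: negated sum of absolute feature differences."""
    return -sum(abs(a[key] - b[key]) for key in a)

def recommend(patient, k=3):
    """Find the k most similar past cases, then pick the treatment
    with the best average outcome among them."""
    nearest = sorted(CASE_DATABASE,
                     key=lambda rec: similarity(patient, rec[0]),
                     reverse=True)[:k]
    outcomes = defaultdict(list)
    for _, treatment, outcome in nearest:
        outcomes[treatment].append(outcome)
    return max(outcomes, key=lambda t: sum(outcomes[t]) / len(outcomes[t]))

if __name__ == "__main__":
    patient = {"age": 63, "smoker": 1, "stage": 2}
    print(recommend(patient))  # recommends whichever treatment the toy evidence favors
```

The unsettling scenario in the paragraph is exactly when a system like this returns an answer the physician's intuition rejects: the code has no notion of "feels right", only of what the matched cases say.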