
Eric Horvitz

From Wikipedia, the free encyclopedia
Eric Horvitz
Born: Eric Joel Horvitz
Nationality: American
Education: Ph.D. and M.D., Stanford University
Occupation: Computer scientist
Employer: Microsoft
Title: Chief Scientific Officer

Eric Joel Horvitz (/ˈhɔːrvɪts/) is an American computer scientist and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer.[1] He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA; Cambridge, MA; New York, NY; Montreal, Canada; Cambridge, UK; and Bangalore, India.

Horvitz was elected a member of the National Academy of Engineering in 2013 "for computational mechanisms for decision making under uncertainty and with bounded resources."

Biography


Horvitz received his Ph.D. and M.D. from Stanford University.[2] His doctoral dissertation, Computation and Action Under Bounded Resources, and follow-on research introduced models of bounded rationality founded in probability and decision theory. He did his doctoral work under advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.

He is currently the Chief Scientific Officer of Microsoft. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the American Association for the Advancement of Science (AAAS), and a member of the National Academy of Engineering (NAE) and the American Academy of Arts and Sciences.

He was elected as an ACM Fellow in 2014 for "contributions to artificial intelligence, and human-computer interaction."[3]

He was elected to the ACM CHI Academy in 2013 for “research at the intersection of human-computer interaction and artificial intelligence.”[4]

He was elected to the American Philosophical Society in 2018.[5]

In 2015, he was awarded the AAAI Feigenbaum Prize,[6] a biennial award for sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems.

In 2015, he was also awarded the ACM-AAAI Allen Newell Award[7] for "contributions to artificial intelligence and human-computer interaction spanning the computing and decision sciences through developing principles and models of sensing, reflection, and rational action."

He serves on the President's Council of Advisors on Science and Technology[8] (PCAST), the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence (AI2), and the Computer Science and Telecommunications Board[9] (CSTB) of the US National Academies.

He has served as president of the Association for the Advancement of AI (AAAI), as a member of the NSF Computer & Information Science & Engineering (CISE) Advisory Board, on the council of the Computing Community Consortium (CCC), as chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science (AAAS), on the Board of Regents[10] of the US National Library of Medicine (NLM), and on the U.S. National Security Commission on AI.

Work


Horvitz's research interests span theoretical and practical challenges with developing systems that perceive, learn, and reason. His contributions include advances in principles and applications of machine learning and inference, information retrieval, human-computer interaction, bioinformatics, and e-commerce.

Horvitz played a significant role in the use of probability and decision theory in artificial intelligence. His work raised the credibility of artificial intelligence in other areas of computer science and computer engineering, influencing fields ranging from human-computer interaction to operating systems. His research helped establish the link between artificial intelligence and decision science. For example, he introduced the concept of bounded optimality, a decision-theoretic approach to bounded rationality.[11] The influence of bounded optimality extends beyond computer science into cognitive science and psychology.[12][13]

He studied the use of probability and utility to guide automated reasoning for decision making, including methods for reasoning about streams of problems[14] that arrive in an environment over time. In related work, he applied probability and machine learning to solve combinatorial problems and to guide theorem proving.[15] He introduced the anytime algorithm paradigm in AI,[16][17] in which partial results, probabilities, or utilities of outcomes are refined with continued computation under varying availability or cost of time, with deliberation guided by the expected value of computation.[18][19]
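
As an illustration of the anytime idea, the following minimal sketch (with hypothetical, illustrative parameters and a simple error proxy, not drawn from Horvitz's papers) refines a Monte Carlo estimate in batches and stops when a crude expected-value-of-computation test says that one more batch would cost more than it is predicted to be worth:

```python
"""Hypothetical sketch of anytime computation with a simple EVC stopping rule."""

import random


def anytime_pi(value_per_error_reduction=500.0, cost_per_batch=0.05,
               batch_size=10_000, max_batches=200):
    inside = 0
    total = 0
    prev_error = None
    estimate = 0.0
    for _ in range(max_batches):
        for _ in range(batch_size):
            x, y = random.random(), random.random()
            inside += x * x + y * y <= 1.0
        total += batch_size
        estimate = 4.0 * inside / total

        # Rough standard-error proxy for the estimate; shrinks about as 1/sqrt(n).
        error = 4.0 * (0.25 / total) ** 0.5
        if prev_error is not None:
            # Use the gain achieved by the latest batch as a prediction of the
            # next batch's gain, and compare it with the cost of that batch.
            evc = value_per_error_reduction * (prev_error - error) - cost_per_batch
            if evc <= 0:
                break  # further deliberation costs more than it is expected to be worth
        prev_error = error
    return estimate, total


if __name__ == "__main__":
    est, n = anytime_pi()
    print(f"pi ~ {est:.4f} after {n} samples")
```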

He has issued long-term challenge problems for AI[20] and has espoused a vision of open-world AI,[21] in which machine intelligences have the ability to understand and perform well in the larger world, including in situations they have not seen before.

He has explored synergies between human and machine intelligence.[22] He introduced principles for using machine learning and decision theory to guide machine versus human initiative,[23] methods that give AI systems an understanding of when to transfer problem solving to humans,[24] and the use of machine learning and planning to identify and merge the complementary abilities of people and AI systems.[25][22][26] In work on human-centered AI, he introduced measures and models of the expected value of displayed information, used to guide what is shown to human decision makers in time-critical settings,[27] as well as methods for making statistical AI inferences more understandable.[28] He introduced models of human attention in computing systems[29][30] and studied the use of machine learning to infer the cost of interrupting computer users.[31][32] His use of machine learning to build models of human surprise was featured as a technology breakthrough by MIT Technology Review.[33]
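
A minimal, hypothetical sketch of the decision-theoretic idea behind mixed-initiative interaction, using illustrative utility values (not taken from the cited work): the system acts on its inferred goal, asks the user, or waits, according to which option has the highest expected utility given its confidence.

```python
"""Hypothetical expected-utility rule for machine versus human initiative."""

from dataclasses import dataclass


@dataclass
class Utilities:
    correct_action: float = 1.0   # utility if the autonomous action matches the user's goal
    wrong_action: float = -2.0    # cost of acting on a misread goal
    dialog: float = 0.3           # utility of asking the user, net of interruption cost
    no_action: float = 0.0        # utility of waiting


def choose_initiative(p_goal: float, u: Utilities) -> str:
    """Return 'act', 'ask', or 'wait' by maximum expected utility, given the
    probability p_goal that the system has inferred the user's goal correctly."""
    eu_act = p_goal * u.correct_action + (1 - p_goal) * u.wrong_action
    options = [("act", eu_act), ("ask", u.dialog), ("wait", u.no_action)]
    return max(options, key=lambda pair: pair[1])[0]


if __name__ == "__main__":
    u = Utilities()
    for p in (0.2, 0.6, 0.95):
        print(f"confidence {p:.2f} -> {choose_initiative(p, u)}")
```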

He investigated the use of AI methods to provide assistance to users, including help with software[34][35] and support in daily life.[36]

He has made contributions to multimodal interaction,[37] and in 2015 received the ACM ICMI Sustained Accomplishment Award for this work.[38][39] His research in this area includes studies of situated interaction,[40][41] in which systems consider the physical details of open-world settings and can carry on dialog with multiple people.[42]

He co-authored probability-based methods to enhance privacy, including a model of altruistic data sharing called community sensing[43] and risk-sensitive approaches[44] such as stochastic privacy.[45]
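
As a rough illustration of the stochastic-privacy idea (a hypothetical sketch, not the published algorithm), a service might draw data from a random subset of users, capping each user's probability of selection while aiming for a target sample size:

```python
"""Hypothetical sketch of sampling users under a per-user selection-probability cap."""

import random


def stochastic_sample(user_ids, target_samples, max_selection_prob, rng=random):
    """Select users independently with the smallest probability expected to yield
    `target_samples`, capped at `max_selection_prob` for every individual user."""
    n = len(user_ids)
    if n == 0:
        return []
    p = min(max_selection_prob, target_samples / n)
    return [uid for uid in user_ids if rng.random() < p]


if __name__ == "__main__":
    users = list(range(10_000))
    chosen = stochastic_sample(users, target_samples=500, max_selection_prob=0.1)
    print(f"selected {len(chosen)} of {len(users)} users "
          f"(per-user selection probability capped at 10%)")
```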

He is Microsoft's top inventor.[46]

He led efforts to apply AI methods within computing systems, including machine learning for memory management in Windows,[47] web prefetching,[48] graphics rendering,[49] and web crawling.[50] He also did early work on AI for debugging software.[51]
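
A minimal sketch, under assumed and simplified utility terms (the names and values below are illustrative, not from the cited papers), of the kind of utility-directed prefetching policy described above: candidates are ranked by predicted probability of access times latency avoided, net of fetch cost, and prefetched within an idle-bandwidth budget.

```python
"""Hypothetical utility-directed prefetching planner."""

from dataclasses import dataclass


@dataclass
class Candidate:
    url: str
    p_access: float          # predicted probability the user will request this page
    latency_saved_s: float   # seconds of waiting avoided if it is already cached
    fetch_cost: float        # cost of prefetching (bandwidth, energy), same units as utility


def plan_prefetch(candidates, budget):
    """Return URLs to prefetch, highest expected net utility first, within budget."""
    scored = [(c.p_access * c.latency_saved_s - c.fetch_cost, c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    plan, spent = [], 0.0
    for net_utility, c in scored:
        if net_utility <= 0 or spent + c.fetch_cost > budget:
            continue
        plan.append(c.url)
        spent += c.fetch_cost
    return plan


if __name__ == "__main__":
    cands = [
        Candidate("https://example.org/a", 0.8, 2.0, 0.5),
        Candidate("https://example.org/b", 0.3, 5.0, 0.5),
        Candidate("https://example.org/c", 0.1, 1.0, 0.5),
    ]
    print(plan_prefetch(cands, budget=1.0))
```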

Horvitz speaks on the topic of artificial intelligence, including on NPR and the Charlie Rose show.[52][53][54] Online talks include both technical lectures and presentations for general audiences (TEDx Austin: Making Friends with Artificial Intelligence). His research has been featured in The New York Times and MIT Technology Review.[55][56][57][33] He has testified before the US Senate on progress, opportunities, and challenges with AI.[58]

AI and society


He has addressed technical and societal challenges and opportunities with the fielding of AI technologies in the open world,[59] including beneficial uses of AI,[60] AI safety and robustness,[61] and where AI systems and capabilities can have inadvertent effects, pose dangers, or be misused.[59][62][63][64] He has presented on caveats with applications of AI in military settings.[65] He and Thomas G. Dietterich called for work on AI alignment, saying that AI systems "must reason about what people intend rather than carrying out commands literally."[62]

He has called for action on potential risks to civil liberties posed by government uses of data in AI systems.[66] He and privacy scholar Deirdre Mulligan have argued that society must balance privacy concerns with the benefits of using data for the greater good.[67]

He has presented on the risks of AI-enabled deepfakes and contributed to media provenance technologies[68] that cryptographically certify the source and history of edits of digital content.[69]
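
The following is a simplified, hypothetical sketch of the provenance idea: a manifest binds a media file's hash to its source and edit history and is authenticated. For brevity it uses an HMAC with a shared secret as a stand-in; real provenance systems such as the cited AMP work rely on public-key signatures and certificate chains.

```python
"""Hypothetical sketch of a signed media-provenance manifest (HMAC stand-in)."""

import hashlib
import hmac
import json


def make_manifest(media_bytes: bytes, source: str, edits: list, key: bytes) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,
        "edits": edits,  # ordered record of the edit history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    claimed = dict(manifest)
    tag = claimed.pop("tag", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())


if __name__ == "__main__":
    key = b"demo-secret"
    media = b"...video bytes..."
    m = make_manifest(media, source="Example Newsroom", edits=["crop", "caption"], key=key)
    print(verify_manifest(media, m, key))          # True: content and history intact
    print(verify_manifest(media + b"x", m, key))   # False: content was altered
```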

Asilomar AI study


He served as President of the AAAI from 2007 to 2009. As AAAI President, he convened and co-chaired the Asilomar AI study, which culminated in a meeting of AI scientists at Asilomar in February 2009. The study considered the nature and timing of AI successes and reviewed concerns about directions in AI development, including the potential loss of control over computer-based intelligences, as well as efforts that could address such concerns and enhance long-term societal outcomes. It was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI, and it attracted public interest.[70]

In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and concerns about artificial intelligence systems escaping from human control.[70] In a later NPR interview, he said that investments in scientific studies of superintelligence would be valuable in guiding proactive efforts, even if people believed the probability of losing control of AI was low, given the high cost of such outcomes.[71]

One Hundred Year Study on Artificial Intelligence


In 2014, Horvitz and his wife defined and funded the One Hundred Year Study on Artificial Intelligence (AI100) at Stanford University.[72][73] In 2016, the AI Index was launched as a project of the One Hundred Year Study.

According to Horvitz, the AI100 gift, which may increase in the future, is sufficient to fund the study for a century.[73] A Stanford press release stated that successive committees convened over the century will "study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play." A framing memo for the study calls out 18 topics for consideration, including law, ethics, the economy, war, and crime.[73][74] The topics also include abuses of AI that could pose threats to democracy and freedom, as well as the possibility of superintelligence and loss of control of AI.

The One Hundred Year Study is overseen by a Standing Committee. The Standing Committee formulates questions and themes and organizes a Study Panel every five years. The Study Panel issues a report that assesses the status and rate of progress of AI technologies, challenges, and opportunities with regard to AI's influences on people and society.

The 2015 study panel of the One Hundred Year Study, chaired by Peter Stone, released a report in September 2016 titled "Artificial Intelligence and Life in 2030."[75] The panel advocated for increased public and private spending on the industry, recommended increased AI expertise at all levels of government, and recommended against blanket government regulation.[76][77] Panel chair Peter Stone argued that AI will not automatically replace human workers, but rather will supplement the workforce and create new jobs in technology maintenance.[76] While focusing mainly on the next 15 years, the report touched on concerns and expectations that had risen in prominence over the preceding decade about the risks of superintelligent robots, stating, "Unlike in the movies, there's no race of superhuman robots on the horizon or probably even possible."[77][78] Stone stated that "it was a conscious decision not to give credence to this in the report."[73]

The report of the second cycle of the AI100 study, chaired by Michael Littman, was published in 2021.[79][80]

Founding of Partnership on AI


He co-founded and has served as board chair of the Partnership on AI, a non-profit organization that brings together Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft with representatives from civil society, academia, and non-profit R&D. The organization's website points to its initiatives, including studies of risk scores in criminal justice,[81] facial recognition systems,[82] AI and the economy,[83] AI safety,[84] AI and media integrity,[85] and documentation of AI systems.[86]

Microsoft Aether Committee


He founded and chairs Microsoft's Aether Committee, the company's internal committee on the responsible development and fielding of AI technologies.[87][88] He has reported that the committee has made recommendations on, and guided decisions about, Microsoft's commercial AI efforts.[89][90] In April 2020, Microsoft published principles, guidelines, and tools developed by the Aether Committee and its working groups, including teams focused on AI reliability and safety, bias and fairness, intelligibility and explanation, and human-AI collaboration.[91]

Publications


Books

  • Horvitz, E. (December 1990), Computation and Action Under Bounded Resources (PDF) (Dissertation), Stanford, CA: Stanford University

Selected articles


Podcasts


References

  1. ^ Bonifacic, Igor (11 March 2020). "Microsoft appoints its first-ever chief scientific officer". Engadget. Retrieved 27 March 2020.
  2. ^ "Eric Horvitz". IEEE Xplore Digital Library. Retrieved 3 June 2019.
  3. ^ "Eric Horvitz". ACM Fellows. 2014.
  4. ^ "2013 SIGCHI Awards – ACM SIGCHI". Retrieved 2022-03-22.
  5. ^ "Election of New Members at the 2018 Spring Meeting | American Philosophical Society".
  6. ^ "The AAAI Feigenbaum Prize". AAAI. Retrieved 14 April 2016.
  7. ^ "ERIC HORVITZ - Award Winner". ACM. Retrieved 3 June 2019.
  8. ^ "Eric Horvitz, MD PhD". The White House. Retrieved 2022-01-05.
  9. ^ "About the Computer Science and Telecommunications Board". National Academies. Retrieved 2022-01-14.
  10. ^ "Board of Regents". U.S. National Library of Medicine. Retrieved 3 June 2019.
  11. ^ Mackworth, Alan (July 2008). "Introduction of Eric Horvitz" (PDF). AAAI Presidential Address.
  12. ^ Gershman, Samuel J.; Horvitz, Eric J.; Tenenbaum, Joshua B. (17 July 2015). "Computational rationality: A converging paradigm for intelligence in brains, minds, and machines". Science. 349 (6245): 273–278. Bibcode:2015Sci...349..273G. doi:10.1126/science.aac6076. PMID 26185246. S2CID 14818619.
  13. ^ Howes, Andrew; Duggan, Geoffrey B.; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L. (1 July 2016). "Predicting Short-term Remembering as Boundedly Optimal Strategy Choice" (PDF). Cognitive Science. 40 (5): 1192–1223. doi:10.1111/cogs.12271. PMID 26294328.
  14. ^ Horvitz, Eric (February 2001), "Principles and Applications of Continual Computation", Artificial Intelligence, 126 (1–2): 159–196, CiteSeerX 10.1.1.476.5653, doi:10.1016/S0004-3702(00)00082-5
  15. ^ Horvitz, Eric J.; Ruan, Y.; Gomes, C.; Kautz, H.; Selman, B.; Chickering, D.M. (July 2001), "A Bayesian Approach to Tackling Hard Computational Problems" (PDF), Proceedings of the Conference on Uncertainty and Artificial Intelligence: 235–244
  16. ^ Horvitz, Eric (July 1987). "Reasoning about beliefs and actions under computational resource constraints" (PDF). UAI'87 Proceedings of the Third Conference on Uncertainty in Artificial Intelligence. Arlington, VA: AUAI Press: 429–447. ISBN 0-444-87417-8.
  17. ^ Horvitz, Eric (August 1988). "Reasoning under varying and uncertain resource constraints" (PDF). AAAI'88 Proceedings of the Seventh AAAI National Conference on Artificial Intelligence. AAAI Press: 111–116.
  18. ^ Horvitz, Eric J.; Cooper, Gregory F.; Heckerman, David E. (August 1989). "Reflection and action under scarce resources: theoretical principles and empirical study" (PDF). IJCAI'89 Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 2. San Francisco, CA: Morgan Kaufmann Publishers Inc.: 1121–1127.
  19. ^ Horvitz, Eric (December 1990). Computation and Action Under Bounded Resources (PDF) (PhD thesis). Stanford University.
  20. ^ Selman, B.; Brooks, R.; Dean, T.; Horvitz, E.; Mitchell, T.; Nilsson, N. (August 1996), "Challenge Problems for Artificial Intelligence", Proceedings of AAAI-96, Thirteenth National Conference on Artificial Intelligence, Portland, Oregon: 1340–1345
  21. ^ Horvitz, Eric (July 2008), "Artificial Intelligence in the Open World", AAAI Presidential Lecture
  22. ^ a b Horvitz, Eric (Jan 13, 2020). "2019 Annual Meeting Plenary: People, Machines, and Intelligence". YouTube. Retrieved 2020-04-02.
  23. ^ Horvitz, Eric (May 1999). "Principles of mixed-initiative user interfaces" (PDF). Proceedings of the SIGCHI conference on Human factors in computing systems the CHI is the limit - CHI '99. New York, NY: ACM. pp. 159–166. CiteSeerX 10.1.1.78.7280. doi:10.1145/302979.303030. ISBN 0-201-48559-1. S2CID 8943607.
  24. ^ Horvitz, Eric; Paek, Tim (2007-03-01). "Complementary computing: policies for transferring callers from dialog systems to human receptionists". User Modeling and User-Adapted Interaction. 17 (1): 159–182. doi:10.1007/s11257-006-9026-1. ISSN 1573-1391. S2CID 2819237.
  25. ^ Kamar, Ece; Hacker, Severin; Horvitz, Eric (8 June 2018). "Combining human and machine intelligence in large-scale crowdsourcing" (PDF). Proceeding, AAMAS '12 Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1. 1. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems: 467–474. ISBN 978-0-9817381-1-6.
  26. ^ Wilder, Bryan; Horvitz, Eric; Kamar, Ece (2020-07-09). "Learning to Complement Humans". Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. Vol. 2. pp. 1526–1533. doi:10.24963/ijcai.2020/212. ISBN 978-0-9992411-6-5. S2CID 218486980.
  27. ^ Horvitz, Eric; Barry, Matthew (August 1995). "Display of information for time-critical decision making" (PDF). Proceeding, UAI'95 Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann Publishers Inc: 296–305. ISBN 1-55860-385-9.
  28. ^ Horvitz, Eric; Heckerman, David; Nathwani, Bharat; Fagan, Lawrence (October 1986). "The use of a heuristic problem-solving hierarchy to facilitate the explanation of hypothesis-directed reasoning" (PDF). Proceedings of Medinfo: 27–31.
  29. ^ Horvitz, Eric; Kadie, Carl; Paek, Tim; Hovel, David (March 2003). "Models of attention in computing and communication: from principles to applications" (PDF). Communications of the ACM. 46. New York, NY: ACM: 52–59. doi:10.1145/636772.636798. S2CID 2584780.
  30. ^ Markoff, John (17 July 2000). "Microsoft Sees Software 'Agent' as Way to Avoid Distractions". The New York Times. Retrieved 3 June 2019.
  31. ^ Horvitz, Eric; Apacible, Johnson (2003-11-05). "Learning and reasoning about interruption". Proceedings of the 5th international conference on Multimodal interfaces. ICMI '03. New York, NY, USA: Association for Computing Machinery. pp. 20–27. doi:10.1145/958432.958440. ISBN 978-1-58113-621-0. S2CID 1183716.
  32. ^ Horvitz, Eric; Koch, Paul; Apacible, Johnson (2004-11-06). "BusyBody". Proceedings of the 2004 ACM conference on Computer supported cooperative work. CSCW '04. New York, NY, USA: Association for Computing Machinery. pp. 507–510. doi:10.1145/1031607.1031690. ISBN 978-1-58113-810-8. S2CID 11517148.
  33. ^ a b Waldrop, M. Mitchell (19 February 2008). "TR10: Modeling Surprise". MIT Technology Review. Retrieved 2022-01-11.
  34. ^ Horvitz, E; Breese, J; Heckerman, D; Hovel, D; Rommelse, K (July 1998). "The Lumiere Project: Bayesian User Modeling for Inferring the Goals and Needs of Software Users". Proceedings of UAI, Conference on Uncertainty in Artificial Intelligence: 256–265.
  35. ^ "Lumiere - Intelligent User Interface". YouTube. 2009-08-31 [1995]. Retrieved 2020-04-02.
  36. ^ "How the forgetfulness of one of Microsoft's top scientists inspired a killer new feature for Windows 10". Business Insider Australia. 2017-02-09. Retrieved 2022-01-11.
  37. ^ "Information Agents: Directions and Futures (2001)". YouTube. 2020-02-21 [2001]. Retrieved 2020-04-02.
  38. ^ "ICMI Sustained Accomplishment Award". International Conference on Multimodal Interaction. Retrieved 2020-04-02.
  39. ^ Bohus, D; Horvitz, E (November 2009). "Dialog in the Open World: Platform and Applications" (PDF). ACM International Conference on Multimodal Interaction.
  40. ^ Bohus, D; Horvitz, E (2019). "Situated Interaction". The Handbook of Multimodal-Multisensor Interfaces, Volume 3. Association for Computing Machinery and Morgan & Claypool. pp. 105–143. ISBN 978-1-970001-75-4.
  41. ^ "Elevating human-computer interaction to a new level of sophistication". YouTube. 2014-04-08. Retrieved 2020-04-02.
  42. ^ Bohus, D; Horvitz, E (September 2009). "Models for multiparty engagement in open-world dialog" (PDF). Proceedings of the SIGDIAL 2009 Conference on the 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGDIAL '09. pp. 225–234. doi:10.3115/1708376.1708409. ISBN 9781932432640. S2CID 2125746.
  43. ^ Krause, A.; Horvitz, E.; Kansal, A.; Zhao, F. (April 2008), "Toward Community Sensing", Proceedings of IPSN 2008
  44. ^ Krause, Andreas; Horvitz, Eric (November 2010). "A Utility-Theoretic Approach to Privacy in Online Services" (PDF). Journal of Artificial Intelligence Research. 39: 633–662. doi:10.1613/jair.3089. S2CID 1582192.
  45. ^ Singla, A.; Horvitz, E.; Kamar, E.; White, R.W. (July 2014), "Stochastic Privacy" (PDF), AAAI, 28, arXiv:1404.5454, Bibcode:2014arXiv1404.5454S, doi:10.1609/aaai.v28i1.8734, S2CID 14584347
  46. ^ "Swimming in Creative Waters: The Art of Invention". Microsoft Research. 2019-05-28. Retrieved 2022-01-13.
  47. ^ Keizer, Gregg (2007-01-19). "Microsoft Predicts The Future With Vista's SuperFetch". InformationWeek. Retrieved 2022-01-13.
  48. ^ Horvitz, Eric (1998-11-01). "Continual computation policies for utility-directed prefetching". Proceedings of the seventh international conference on Information and knowledge management. CIKM '98. New York, NY, USA: Association for Computing Machinery. pp. 175–184. doi:10.1145/288627.288655. ISBN 978-1-58113-061-4. S2CID 13518550.
  49. ^ Horvitz, Eric J.; Lengyel, Jed (2013-02-06). "Perception, Attention, and Resources: A Decision-Theoretic Approach to Graphics Rendering". arXiv:1302.1547 [cs.AI].
  50. ^ Kolobov, Andrey; Peres, Yuval; Lubetzky, Eyal; Horvitz, Eric (2019-07-18). "Optimal Freshness Crawl Under Politeness Constraints". Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. SIGIR'19. New York, NY, USA: Association for Computing Machinery. pp. 495–504. doi:10.1145/3331184.3331241. ISBN 978-1-4503-6172-9. S2CID 196203203.
  51. ^ Burnell, Lisa; Horvitz, Eric (1995-03-01). "Structure and chance: melding logic and probability for software debugging". Communications of the ACM. 38 (3): 31–ff. doi:10.1145/203330.203338. ISSN 0001-0782. S2CID 9644536.
  52. ^ Hansen, Liane (21 March 2009). "Meet Laura, Your Virtual Personal Assistant". NPR. Retrieved 16 March 2011.
  53. ^ Kaste, Martin (11 Jan 2011). "The Singularity: Humanity's Last Invention?". NPR. Retrieved 14 Feb 2011.
  54. ^ Rose, Charlie. "A panel discussion about Artificial Intelligence". Archived from the original on 2011-02-13. Retrieved 2011-03-12.
  55. ^ Markoff, John (10 April 2008). "Microsoft Introduces Tool for Avoiding Traffic Jams". The New York Times. Retrieved 16 March 2011.
  56. ^ Markoff, John (17 July 2000). "Microsoft Sees Software 'Agent' as Way to Avoid Distractions". The New York Times. Retrieved 16 March 2011.
  57. ^ Lohr, Steve; Markoff, John (24 June 2010). "Smarter Than You Think: Computers Learn to Listen, and Some Talk Back". The New York Times. Retrieved 12 March 2011.
  58. ^ Horvitz, Eric (30 November 2016). "Reflections on the Status and Future of Artificial Intelligence" (PDF). erichorvitz.com. Retrieved 3 June 2019.
  59. ^ a b Horvitz, Eric (7 July 2017). "AI, people, and society". Science. 357 (6346): 7. Bibcode:2017Sci...357....7H. doi:10.1126/science.aao2466. PMID 28684472.
  60. ^ "AI for Social Good (2016): Keynote AI in Support of People and Society". YouTube. 2016-06-13. Retrieved 2020-04-20.
  61. ^ Horvitz, Eric (2016-06-27). "Reflections on Safety and Artificial Intelligence" (PDF). Eric Horvitz. Retrieved 2020-04-20.
  62. ^ a b Dietterich, Thomas G; Horvitz, Eric J. (October 2015). "Rise of Concerns about AI: Reflections and Directions" (PDF). Communications of the ACM. 58 (10): 38–40. doi:10.1145/2770869.
  63. ^ "Conference on Ethics & AI: Keynote Session". YouTube. 2018-04-09. Retrieved 2020-04-20.
  64. ^ Horvitz, Eric (2017-03-28). "The Long View: AI Directions, Challenges, and Futures". YouTube. Retrieved 2020-04-20.
  65. ^ "Keynote Address, Eric Horvitz: AI Advances, Aspirations—and Concerns". YouTube. 2019-11-15. Retrieved 2020-04-20.
  66. ^ Horvitz, Eric; Clyburn, Mignon; Felten, Ed; LeBlanc, Travis (2021-05-17). "Caution ahead: Navigating risks to freedoms posed by AI". TheHill. Retrieved 2022-01-19.
  67. ^ Horvitz, Eric; Mulligan, Deirdre (17 July 2015). "Data, privacy, and the greater good" (PDF). Science. Vol. 349, no. 6245. pp. 253–254. Retrieved 19 Jan 2022.
  68. ^ England, Paul; Malvar, Henrique S.; Horvitz, Eric; Stokes, Jack W.; Fournet, Cédric; Burke-Aguero, Rebecca; Chamayou, Amaury; Clebsch, Sylvan; Costa, Manuel (2021-07-15), "AMP: Authentication of media via provenance", Proceedings of the 12th ACM Multimedia Systems Conference, New York, NY, USA: Association for Computing Machinery, pp. 108–121, doi:10.1145/3458305.3459599, ISBN 978-1-4503-8434-6, S2CID 210859168
  69. ^ Horvitz, Eric (2021-02-22). "A promising step forward on disinformation". Microsoft On the Issues. Retrieved 2022-01-19.
  70. ^ a b Markoff, John (26 July 2009). "Scientists Worry Machines May Outsmart Man". The New York Times.
  71. ^ Siegel, Robert (11 January 2011). "The Singularity: Humanity's Last Invention?". NPR.
  72. ^ You, Jia (9 January 2015). "A 100-year study of artificial intelligence? Microsoft Research's Eric Horvitz explains". Science.
  73. ^ a b c d Markoff, John (15 December 2014). "Study to Examine Effects of Artificial Intelligence". The New York Times. Retrieved 1 October 2016.
  74. ^ "One-Hundred Year Study of Artificial Intelligence: Reflections and Framing". Eric Horvitz. 2014. Retrieved 1 October 2016.
  75. ^ "Report: Artificial intelligence to transform urban cities". Houston Chronicle. 1 September 2016. Retrieved 1 October 2016.
  76. ^ a b Dussault, Joseph (4 September 2016). "AI in the real world: Tech leaders consider practical issues". The Christian Science Monitor. Retrieved 1 October 2016.
  77. ^ a b Peter Stone et al. "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016. Doc: http://ai100.stanford.edu/2016-report. Accessed: October 1, 2016.
  78. ^ Knight, Will (1 September 2016). "Artificial intelligence wants to be your bro, not your foe". MIT Technology Review. Retrieved 1 October 2016.
  79. ^ Stacey, Kevin (16 Sep 2021). "New Report Assesses Progress and Risks of Artificial Intelligence". Stanford University HAI News and Announcements. Retrieved 2022-01-20.
  80. ^ McKendrick, Joe (18 Sep 2021). "Artificial intelligence success is tied to ability to augment, not just automate". ZDNet. Retrieved 2022-01-20.
  81. ^ "Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System". The Partnership on AI. 2019-04-23. Retrieved 2020-04-24.
  82. ^ "Bringing Facial Recognition Systems To Light". The Partnership on AI. 2020-02-18. Retrieved 2020-04-24.
  83. ^ "AI, Labor, and the Economy Case Study Compendium". The Partnership on AI. 2019-04-30. Retrieved 2020-04-24.
  84. ^ "Introducing SafeLife: Safety Benchmarks for Reinforcement Learning". The Partnership on AI. 2019-12-04. Retrieved 2020-04-24.
  85. ^ "AI and Media Integrity Steering Committee". The Partnership on AI. 2019-12-05. Retrieved 2020-04-24.
  86. ^ "About ML". The Partnership on AI. Retrieved 2020-04-24.
  87. ^ Nadella, Satya (2018-03-29). "Satya Nadella email to employees: Embracing our future: Intelligent Cloud and Intelligent Edge". Microsoft Stories. Retrieved 3 June 2019.
  88. ^ "Microsoft #TechTalk: AI and Ethics". YouTube. 2019-07-17. Retrieved 2020-04-15.
  89. ^ Boyle, Alan (9 April 2018). "Microsoft is turning down some sales over AI ethics, top researcher Eric Horvitz says". GeekWire. Retrieved 3 June 2019.
  90. ^ "Conference on Ethics & AI: Keynote Session". YouTube. Carnegie Mellon University. 9 April 2018. Retrieved 3 June 2019.
  91. ^ "Responsible AI principles from Microsoft". Microsoft Artificial Intelligence. Retrieved 2020-04-15.