Please use this identifier to cite or link to this item:
http://hdl.handle.net/11320/11690
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rejmaniak, Rafał | - |
dc.date.accessioned | 2021-10-11T05:58:49Z | - |
dc.date.available | 2021-10-11T05:58:49Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Białostockie Studia Prawnicze, vol. 26, no. 3, 2021, pp. 25-42 | pl |
dc.identifier.issn | 1689-7404 | - |
dc.identifier.uri | http://hdl.handle.net/11320/11690 | - |
dc.description.abstract | Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve taking decisions about people or predicting future behaviours. These decisions are commonly regarded as fairer and more objective than those taken by humans, as AI systems are thought to be resistant to such influences as emotions or subjective beliefs. In reality, using such a system does not guarantee either objectivity or fairness. This article describes the phenomenon of bias in AI systems and the role of humans in creating it. The analysis shows that AI systems, even if operating correctly from a technical standpoint, are not guaranteed to take decisions that are more objective than those of a human, but those systems can still be used to reduce social inequalities. | pl |
dc.language.iso | en | pl |
dc.publisher | Wydział Prawa Uniwersytetu w Białymstoku, Temida 2 | pl |
dc.rights | Uznanie autorstwa-Użycie niekomercyjne-Bez utworów zależnych 3.0 Unported | * |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/3.0/deed.pl | * |
dc.subject | AI discrimination | pl |
dc.subject | AI fairness | pl |
dc.subject | algorithmic bias | pl |
dc.subject | artificial intelligence | pl |
dc.title | Bias in Artificial Intelligence Systems | pl |
dc.type | Article | pl |
dc.rights.holder | Uznanie autorstwa-Użycie niekomercyjne-Bez utworów zależnych 3.0 Unported | pl |
dc.identifier.doi | 10.15290/bsp.2021.26.03.02 | - |
dc.description.Email | r.rejmaniak@uwb.edu.pl | pl |
dc.description.Biographicalnote | Rafał Rejmaniak is Assistant Professor in the Department of Historical and Legal Sciences, Theory and Philosophy of Law, and Comparative Law at the Faculty of Law, University of Białystok, Poland. | - |
dc.description.Affiliation | University of Białystok, Poland | pl |
dc.description.references | Angwin J., Larson J., Mattu S. and Kirchner L., Machine Bias, ProPublica 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. | pl |
dc.description.references | Barfield W. and Pagallo U., Advanced Introduction to Law and Artificial Intelligence, Cheltenham/Northampton 2020. | pl |
dc.description.references | Barocas S. and Selbst A.D., Big Data’s disparate impact, “California Law Review” 2016, vol. 104, no. 2. | pl |
dc.description.references | Berendt B. and Preibusch S., Toward accountable discrimination-aware data mining: The importance of keeping the human in the loop – and under the looking-glass, “Big Data” 2017, vol. 5, no. 2. | pl |
dc.description.references | Boden M.A., Sztuczna inteligencja. Jej natura i przyszłość, trans. T. Sieczkowski, Łódź 2020. | pl |
dc.description.references | Borysiak W. and Bosek L., Komentarz do art. 32, (in:) M. Safjan and L. Bosek (eds.), Konstytucja RP. Tom I. Komentarz do art. 1–86, Warsaw 2016. | pl |
dc.description.references | Brennan T., Dieterich W. and Ehret B., Evaluating the predictive validity of the COMPAS risk and needs assessment system, “Criminal Justice and Behavior” 2009, vol. 36, no. 1. | pl |
dc.description.references | Cataleta M.S. and Cataleta A., Artificial Intelligence and Human Rights, an Unequal Struggle, “CIFILE Journal of International Law” 2020, vol. 1, no. 2. | pl |
dc.description.references | Coeckelbergh M., AI Ethics, Cambridge/London 2020. | pl |
dc.description.references | Cummings M.L., Automation and Accountability in Decision Support System Interface Design, “The Journal of Technology Studies” 2006, vol. 32, no. 1. | pl |
dc.description.references | Danks D. and London A.J., Algorithmic Bias in Autonomous Systems, “Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017)”, https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf. | pl |
dc.description.references | Davenport T. and Kalakota R., The potential for artificial intelligence in healthcare, “Future Healthcare Journal” 2019, vol. 6, no. 2. | pl |
dc.description.references | Dymitruk M., Sztuczna inteligencja w wymiarze sprawiedliwości? (in:) L. Lai and M. Świerczyński (eds.), Prawo sztucznej inteligencji, Warsaw 2020. | pl |
dc.description.references | European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)). | pl |
dc.description.references | Fjeld J., Achten N., Hilligoss H., Nagy A. and Srikumar M., Principled Artificial Intelligence. Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, Cambridge 2020. | pl |
dc.description.references | Flasiński M., Wstęp do sztucznej inteligencji, Warsaw 2020. | pl |
dc.description.references | Fry H., Hello world. Jak być człowiekiem w epoce maszyn, trans. S. Musielak, Krakow 2019. | pl |
dc.description.references | Geman S., Bienstock E. and Doursat R., Neural networks and the bias/variance dilemma, “Neural Computation” 1992, vol. 4, no. 1. | pl |
dc.description.references | Hacker P., Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law, “Common Market Law Review” 2018, vol. 55. | pl |
dc.description.references | High-Level Expert Group on Artificial Intelligence (appointed by the European Commission in June 2018), A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines, Brussels 2019. | pl |
dc.description.references | High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, Brussels 2019. | pl |
dc.description.references | Jernigan C. and Mistree B.F., Gaydar: Facebook friendships expose sexual orientation, “First Monday” 2009, vol. 14, no. 10. | pl |
dc.description.references | Kasperska A., Problemy zastosowania sztucznych sieci neuronalnych w praktyce prawniczej, „Przegląd Prawa Publicznego” 2017, no. 11. | pl |
dc.description.references | Lattimore F., O’Callaghan S., Paleologos Z., Reid A., Santow E., Sargeant H. and Thomsen A., Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias. Technical Paper, Australian Human Rights Commission, Sydney 2020. | pl |
dc.description.references | Massey G. and Ehrensberger-Dow M., Machine learning: Implications for translator education, “Lebende Sprachen” 2017, vol. 62, no. 2. | pl |
dc.description.references | Michie D., Methodologies from Machine Learning in Data Analysis and Software, “The Computer Journal” 1991, vol. 34, no. 6. | pl |
dc.description.references | Neff G. and Nagy P., Talking to Bots: Symbiotic Agency and the Case of Tay, “International Journal of Communication” 2016, no. 10. | pl |
dc.description.references | Ntoutsi E., Fafalios P., Gadiraju U., Iosifidis V., Nejdl W., Vidal M.-E., Ruggieri S., Turini F., Papadopoulos S., Krasanakis E., Kompatsiaris I., Kinder-Kurlanda K., Wagner C., Karimi F., Fernandez M., Alani H., Berendt B., Kruegel T., Heinze Ch., Broelemann K., Kasneci G., Tiropanis T. and Staab S., Bias in data-driven artificial intelligence systems – An introductory survey, “WIREs Data Mining Knowledge Discovery” 2020, vol. 10, no. 3. | pl |
dc.description.references | O’Neil C., Broń matematycznej zagłady. Jak algorytmy zwiększają nierówności i zagrażają demokracji, trans. M. Z. Zieliński, Warsaw 2017. | pl |
dc.description.references | Raji I.D. and Buolamwini J., Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, “Conference on Artificial Intelligence, Ethics, and Society” 2019, https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/. | pl |
dc.description.references | Ribeiro M.T., Singh S. and Guestrin C., “Why Should I Trust You?” Explaining the Predictions of Any Classifier, “22nd ACM SIGKDD International Conference 2016, San Francisco”, https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf. | pl |
dc.description.references | Rodrigues R., Legal and human rights issues of AI: Gaps, challenges and vulnerabilities, “Journal of Responsible Technology” 2020, vol. 4. | pl |
dc.description.references | Roselli D., Matthews J. and Talagala N., Managing Bias in AI, “Companion Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA”, May 2019. | pl |
dc.description.references | Rutkowski L., Metody i techniki sztucznej inteligencji, Warsaw 2012. | pl |
dc.description.references | White Paper On Artificial Intelligence. A European approach to excellence and trust, COM(2020) 65 final, European Commission, Brussels 2020. | pl |
dc.description.references | Yapo A. and Weiss J., Ethical Implications of Bias In Machine Learning, “Proceedings of the Annual Hawaii International Conference on System Sciences” 2018. | pl |
dc.description.references | Zuiderveen Borgesius F., Discrimination, artificial intelligence and algorithmic decision-making, Council of Europe, Strasbourg 2018. | pl |
dc.description.volume | 26 | pl |
dc.description.number | 3 | pl |
dc.description.firstpage | 25 | pl |
dc.description.lastpage | 42 | pl |
dc.identifier.citation2 | Białostockie Studia Prawnicze | pl |
dc.identifier.orcid | 0000-0003-1908-5844 | - |
Appears in collection(s): | Artykuły naukowe (WP) Białostockie Studia Prawnicze, 2021, Vol. 26 nr 3
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
BSP_26_3_R_Rejmaniak_Bias_in_Artificial_Intelligence_Systems.pdf | | 210.36 kB | Adobe PDF | Open |
This item is available under a Creative Commons License.