The research presented on this page was funded by the National Science Centre, Poland, under the project “Botonomics: the influence of intelligent machines on economic behavior” (grant number 2021/42/E/HS4/00289).

Working papers

Niszczota, P., & Antoniou, E. (2026). Do people expect different behavior from large language models acting on their behalf? Evidence from norm elicitations in two canonical economic games (No. arXiv:2601.15312). arXiv. https://doi.org/10.48550/arXiv.2601.15312

Niszczota, P., & Grützner, C. (2026). Antisocial behavior towards large language model users: Experimental evidence (No. arXiv:2601.09772). arXiv. https://doi.org/10.48550/arXiv.2601.09772

Published papers

Niszczota, P., Janczak, M., & Misiak, M. (2025). Large language models can replicate cross-cultural differences in personality. Journal of Research in Personality, 115, 104584. https://doi.org/10.1016/j.jrp.2025.104584

Niszczota, P., & Abbas, S. (2023). GPT has become financially literate: Insights from financial literacy tests of GPT and a preliminary test of how people use it as a source of advice. Finance Research Letters, 58, 104333. https://doi.org/10.1016/j.frl.2023.104333

Niszczota, P., & Conway, P. (2023). Judgements of research co-created by Generative AI: Experimental evidence. Economics and Business Review, 9(2). https://doi.org/10.18559/ebr.2023.2.744

Niszczota, P., & Rybicka, I. (2023). The credibility of dietary advice formulated by ChatGPT: Robo-diets for people with food allergies. Nutrition, 112, 112076. https://doi.org/10.1016/j.nut.2023.112076