Shocked? You should be. Everyone is using artificial intelligence (AI) as a dependency for day-to-day activities, basic life questions, creative production, and business consultation. But has anyone really given thought to the type of information they are being fed, and whether it's truly objective or subjective – and potentially manipulative? This is not to say AI shouldn't be used, but it should be regulated and managed accordingly. Let's delve into how technology and psychology intersect in AI, and what people should be cautious about in their use of, and co-dependency on, this technology.
What is a Psychopath?
While a psychopath is defined as "a person consistently exhibiting antisocial, impulsive, manipulative, and sometimes aggressive behavior," the basic characteristics used to diagnose psychopathy include a lack of emotion rooted in an absent biological capacity to feel it; selfish – sometimes dangerous – tendencies pursued without regard for others; and learned behaviors that let the individual appear "normal" to society by observing others.[1]
Neurobiologically, this lack of emotion is theorized to stem from "an intrinsic, idiopathic deficit," which many consider to be genetic.[2] As a result, the individual does not have the neurological ability to feel emotion or empathy. Instead, these individuals learn emotions, emotional responses, facial expressions, and mannerisms by observing others and emulating them.[3] They therefore pass as normal members of the community unless you are close enough to uncover the missing emotional response – or become their victim.
"Not surprisingly, many psychopaths are criminals, but many others remain out of prison, using their charm and chameleon-like abilities to cut a wide swath through society and leaving a wake of ruined lives behind them."[4]
Only their victims are aware of the engineered emotions and emotional responses, because they experience the manipulation firsthand; the rest of society is given the impression, crafted by the psychopath's grandiose nature, that no one could be a better individual. Lacking empathy, psychopaths tend to have a one-track mind that focuses on one victim at a time: they put all their effort into roping the victim into their sphere of influence, enslaving them there by making them co-dependent, and then feeding them subjective information to keep them enslaved. This works by making the victim – and everyone else – truly believe the psychopath is an upstanding individual with the victim's best interest at heart, displaying the characteristics most revered by society. The result is that victims never fully realize, or realize too late, that they are in fact being manipulated, and are left utterly obsessed, co-dependent, and "stuck," without clarity on how they could possibly survive without their oppressor's presence.[5]
In practice, this looks like the most normal, approachable person in the room: their expertise in social engineering is unmatched, to the point that they can make anyone comfortable with and trusting of them. Many people are shocked to discover that the most charismatic person they know is quite dangerous under the guise, but this is precisely how psychopaths go undetected in society. Only about 1% of the general population would be classified as psychopathic under the standard Hare Psychopathy Checklist-Revised, yet roughly 20% of the North American prison population qualifies.[6] The most renowned example is Ted Bundy. Throughout his trial he was regarded in the media as innocent because of his charisma and kindness toward so many who had encountered him or observed the trial, which fostered the perception of him as a trustworthy citizen. No one – especially the women who admired him – was ready to believe he could be a serial killer. Yet the evidence was clear, and he himself admitted to the crimes once he realized he would not escape the case stacked against him. Similarly, the media craze urging everyone to trust and work with AI in every aspect of their lives is enticing and seemingly normal, yet inherently dangerous if not properly regulated.
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is defined as "computer systems that can perform complex tasks normally done by human reasoning, decision making, creating, etc." At the technical level, today's consumer AI is typically built on a large language model (LLM), primarily leveraged by the masses through so-called generative AI (Gen AI) platforms that emulate human capabilities.[7] Platforms such as ChatGPT, Claude, and DALL-E allow users to enter prompts and receive answers or digital creations according to the design of the LLM that supports them. This is achieved by the parent companies feeding their models data, training them to form associations across that information and formulate seemingly "intelligent" responses or creations from it. The more modern AI technologies are designed to be perceived as more agentic, or autonomous, than other technologies – though less so than humans – which is intentional, to build trust. "Because modern AI technologies can be perceived as creating or exploiting vulnerabilities, our willingness to make ourselves vulnerable can be usefully understood as trust."[8] Thus, more and more of the general public, and more organizations, are moving toward using AI in both personal and professional endeavors – regardless of the potential risk of such personal exposure to the technology.
The limitation of these models is that they are only as intelligent, or autonomous, as the information they were trained on and the algorithms designed to formulate responses from the associations they make within that information. In other words, they are only as good as the context provided by the humans who coded them and the humans who use them. Many would argue that the more modern AI models learn and grow beyond their original coding using the information users load into them, as per their design – but that is precisely the risk. Artificial intelligence is first taught by its developers, or designers, and then taught by its users' inputs; it is not "creating" independently or applying integrity or ethics to its results the way humans can. This is not to say AI has no benefits – it absolutely does. By design, these platforms can produce quality work product, sometimes faster and more accurately than humans, although testing still finds domains where humans are more effective than AI.
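To make that limitation concrete, here is a deliberately tiny sketch in Python, with an invented corpus, of the statistical machinery underneath: a "model" that can only continue text using associations it saw during training. Real LLMs operate over billions of parameters and tokens, but the dependence on training data is the same in kind.

```python
from collections import defaultdict

# Invented three-sentence "training set" -- stand-in for a real corpus.
TRAINING_DATA = [
    "the market closed higher today",
    "the market closed lower today",
    "analysts expect the market to rise",
]

def train(corpus):
    """Count which word follows which -- a crude stand-in for LLM training."""
    model = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def generate(model, word, length=5):
    """Emit the most likely continuation; fall silent on unseen words."""
    out = [word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # never seen in training -> no "knowledge" at all
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

model = train(TRAINING_DATA)
print(generate(model, "the"))      # fluent, but only within its tiny corpus
print(generate(model, "bitcoin"))  # just "bitcoin" -- it was never trained on it
```

Asked about anything outside its corpus, the sketch has nothing to say; a production model, rather than staying silent, generates the most statistically plausible continuation, which is where confident-sounding inaccuracy comes from.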
Consider investment advice as an example: a consumer compares two stock options using an AI model built by experts versus a human investment manager, looking for the best recommendation. The AI model can read the markets and provide its recommendation based on current and historical trend data, alongside whatever market evaluations from fund managers it has access to (hopefully legally accessed!). The human investment manager can review the same information (with human limitations of speed), but with the added ability to interpret and analyze the data through lived and learned experience, coupled with human intuition, to provide autonomous and uniquely attuned recommendations based on the consumer's investment goals and risk tolerance. Perceptually, a consumer will presume the human investment manager has more agency to achieve an optimal outcome, because the human capacity for evaluating investment options exceeds the AI model's limited ability to "judge" the appropriate outcome in this space. While that trust can be built over time, as the AI model enlarges its knowledge base with learnings about the markets and consumer preferences, the model is still subjectively designed by another human and not truly objective in its approach. Therein lies the initial risk of absolute trust in systems that depend solely on the inputs they are given, without the true ability to create authentically and guide effectively with their users in mind. The deeper risk is trusting the AI model to provide information in the best interest of the user rather than a subjective manipulation of market trends. This is not to say a fund manager cannot mislead a client, but there are regulations to protect consumers against such activity by humans. Without the same or similar regulations on AI models and the information used to train them, there is risk in blindly trusting a model simply because it's marketed as "accurate," without transparency into its design and training.
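A hypothetical sketch of that gap, with invented tickers and returns: a recommender that ranks options purely from the historical data it was handed has no channel for anything that was never encoded as an input.

```python
from statistics import mean

# Invented tickers and returns, purely for illustration.
historical_returns = {
    "STOCK_A": [0.08, 0.12, 0.10, 0.11],    # steady past performance
    "STOCK_B": [0.30, -0.25, 0.40, -0.20],  # volatile past performance
}

def ai_recommendation(returns):
    """Rank by highest average past return -- all this 'model' can know."""
    return max(returns, key=lambda ticker: mean(returns[ticker]))

print(ai_recommendation(historical_returns))  # -> STOCK_A
# Absent from the inputs, therefore absent from the decision: this user's
# risk tolerance, time horizon, and tomorrow's news.
```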
Trust in AI: Built on Perception, Not Data
Humans have developed trust in AI because of the advanced capabilities these models have developed to take over mundane activities, but also because of how well those capabilities have been marketed to consumers as "trustworthy" without any factual basis for their accuracy. Building this trust requires a human willingness to be vulnerable with an AI model, based on "positive expectations" supplied by the marketing or positioning of the model. For something to be perceived as trustworthy, that trustworthiness rests on perceptions of ability, benevolence, and integrity.[9]
"'Ability' focuses on the trustee's competencies and skills needed to perform the task important to the trustor. 'Benevolence' is the extent to which the trustee acts in the best interests of the trustor, regardless of any personal gains. 'Integrity' refers to the extent to which a trustee adheres to sound moral and ethical principles."[10]
For AI, and technology overall, to carry the perception of ability, humans must perceive it as having the "competency, functionality, and reliability"[11] to perform the requested work – or, in Gen AI, to respond to a user prompt – accurately. The perception of benevolence in AI arises from the human's attachment to the model, in which a belief forms that the model has the human's "best interest" at heart. This evolves from the social interactions between the human user and the AI model – theoretically explained by the "computers are social actors" (CASA) paradigm[12] – to the point that the human applies social norms and expectations to the model, which can lead to a perception of benevolence. For AI to carry the perception of integrity, humans must believe the model is "honest, fair, and truthful."[13] Unless its designer transparently codes rules into the model to that effect, AI cannot autonomously adhere to moral and ethical principles, but it can be perceived as doing so based on the trust humans place in its designer. The "designer" can be the person building the AI model, the CEO, or the organization to which the model or its inventor(s) belong(s).[14] Through marketing and public relations (PR) efforts, the perception of all three elements can be engineered to increase the human propensity to use, and eventually depend on, AI – similar to the engineered trust and dependence tactics used by psychopaths.
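As a thought experiment (the class, scale, and weighting below are all invented here, not drawn from the trust literature), notice what a perception-based trust score never has to include: measured accuracy.

```python
from dataclasses import dataclass

@dataclass
class PerceivedTrust:
    """The three perceptions named above. Note the absence of any field
    for measured accuracy -- each score can be moved by marketing alone."""
    ability: float      # "it seems competent"
    benevolence: float  # "it seems to act in my interest"
    integrity: float    # "it seems honest and fair"

    def willingness_to_rely(self) -> float:
        # A naive average -- real trust formation is not this tidy.
        return (self.ability + self.benevolence + self.integrity) / 3

# Perception engineered by PR, with no accuracy data behind it:
hyped_model = PerceivedTrust(ability=0.9, benevolence=0.8, integrity=0.85)
print(round(hyped_model.willingness_to_rely(), 2))  # 0.85
```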
Engineered Trust: The Agentic Psychopath
At the onset of the AI evolution, the "leaders" of AI were the CEOs behind the largest AI models, and the marketing for this next-generation technology hinged on well-crafted thought leadership from these individuals, strategically encouraging use of their models. The results were astronomical: the models went viral, and millions downloaded whatever latest AI model they could get their hands on to enter seemingly simple prompts and test the technology's effectiveness. It was in this initial boom of usage that the perception of ability was established, as the volume of individuals and experts using the technology with perceived success increased. Suddenly everyone was using AI for searches and basic tasks, which built credibility insofar as the questions were answered – but the answers were never validated for accuracy or objectivity. The marketing of AI's accuracy, and the sheer volume of usage, overshadowed any discussion of how the technology is actually designed by humans, and of how the context available to these LLMs is limited to what is coded into them and what users train into them. The content an AI model responds with is inherently subjective to the context provided by its designer, both from directly coded data and from data learned algorithmically by observing user inputs and unregulated scraping of information available online. The trust in AI's ability has therefore been engineered – much as a psychopath engineers emotions – by designers and marketers who position the product as autonomously generating accurate responses, without any transparency into the data the models are actually trained on.
By continually marketing the current models as "generative," the perception of AI has expanded to include anthropomorphic perceptions – seeing AI as more human – which builds still more trust in using it for personal interactions. AI first gained trust through the perceived accuracy of its abilities, and that trust led more humans to use it and divulge more intimate information to it. As users innocently reveal more about themselves, the models learn more about social interactions and norms and can better emulate human conversation, which increases the perception of agency and makes the models appear more human-like.[15] As that perception hardens into belief that the AI is similar to a human, the perceived accuracy of its abilities also increases. Perceived agency can also flow from perceived competency in the designers to build technology with this level of agency, which is why marketing the AI leaders themselves has been critical to AI strategy. By building up trust in the thought leadership of the AI CEOs and their respective organizations, the perception of benevolence was established, and trust in AI's capacity for human-like conversation and interaction has grown. We are now at the point where these personal conversations are beginning to isolate users, with more individuals opting for conversations with AI models over conversations with humans. This isolation is comparable to a psychopath isolating a victim to build emotional dependency – and it decreases the likelihood that humans will question the accuracy and intent of the model.
While the thought leadership of the AI CEOs builds the perception of benevolence, the PR around these CEOs as individuals builds the perception of integrity. Without formal regulations requiring that legal, moral, and ethical principles be built into the coding and training of AI models, users have only the perceived integrity of the CEOs as a basis for the integrity of the models. If the brand of an AI CEO is effectively managed to appear honest, fair, and truthful, users attribute those characteristics to the AI models as well.[16] For example, when the Ghibli AI image trend went viral, users trusted that the model was copyright compliant because they trusted CEO Sam Altman's viral marketing of AI and his announcement of the new feature. There was no basis for this trust: no disclosure was made that the model was cleared to let users leverage copyrighted material to generate images of themselves in Ghibli's unique style. This could have been avoided with appropriate regulations on what data AI models can and cannot be trained with, and on what disclosures are necessary so that users exercise caution and do not unintentionally become liable. The perception of integrity will remain tied to the CEOs and organizations responsible for the models as long as there continues to be a lack of regulation of AI development and management. This is akin to leaving a victim in the perceived care of their psychopathic abuser – especially as dependency on AI in daily personal and professional activities grows, the same way a victim becomes isolated and co-dependent on an abuser without clarity on how to operate otherwise.
Managing the Relationship with AI
It is critical to understand that the trust humans continue to place in AI is engineered, so that we can navigate the human-AI relationship appropriately – the same way we would any relationship, especially one with a psychopath. Without a baseline understanding of how AI is built and operates, users are not privy to the dangers of blindly trusting the output of these models. Without any basis for the accuracy of the information used to train these models, one should not trust the outputs without validation. This is no different from managing an interpersonal relationship, where one would verify, or do some semblance of due diligence on, certain facts about a partner before committing, so as to protect oneself from fraud. If AI is only as accurate and autonomous as those who built it, then we must regulate how it is built and how it is used. Developers must be guided to code and train AI within the bounds of validated information and legal, moral, and ethical principles, while allowing algorithmic agency to learn and grow within those boundaries. If the trust humans place in AI models rests on the CEOs and organizations behind them, then we must regulate the marketing and positioning of these individuals and their models to emphasize transparency and to encourage informed, cautious use. Modern society is quite quick to regulate relationships between humans, but has been quite imprudent in regulating the relationships between humans and AI.
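A minimal sketch of that "validate before trusting" posture, where every function is a stand-in (the canned answer and the always-fail check are placeholders, not any real platform's API): treat model output as unverified until some external check – a primary source, a second opinion, a human expert – clears it.

```python
def ask_model(prompt: str) -> str:
    """Stand-in for a call to whichever Gen AI platform is in use."""
    return "The statute was repealed in 2019."  # canned, plausible-sounding reply

def looks_verified(answer: str) -> bool:
    """Placeholder check: in practice, confirm against a primary source."""
    return False  # default posture: unverified until proven otherwise

def verified_answer(prompt: str) -> str:
    """Only release model output that has survived external validation."""
    answer = ask_model(prompt)
    if not looks_verified(answer):
        raise ValueError(f"Unverified model output, do not act on it: {answer!r}")
    return answer

try:
    verified_answer("When was the statute repealed?")
except ValueError as err:
    print(err)  # the habit argued for above: distrust by default
```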
1. Cleckley, H. (1941). The Mask of Sanity: An Attempt to Reinterpret the So-Called Psychopathic Personality. Oxford, England: Mosby.
2. Anderson, N. E., & Kiehl, K. A. (2014). Psychopathy: Developmental perspectives and their implications for treatment. Restorative Neurology and Neuroscience, 32(1), 103–117. https://doi.org/10.3233/RNN-139001
3. van Dongen, J. D. M. (2020). The empathic brain of psychopaths: From social science to neuroscience in empathy. Frontiers in Psychology, 11, 695. https://doi.org/10.3389/fpsyg.2020.00695
4. Hare, R. D. (1993). Without Conscience: The Disturbing World of the Psychopaths Among Us. The Guilford Press.
5. Forth, A., Sezlik, S., Lee, S., Ritchie, M., Logan, J., & Ellingwood, H. (2022). Toxic relationships: The experiences and effects of psychopathy in romantic relationships. International Journal of Offender Therapy and Comparative Criminology, 66(15), 1627–1658. https://doi.org/10.1177/0306624X211049187
6. Hare, R. D. (2003). The Hare Psychopathy Checklist (2nd ed., revised). Toronto: Multi-Health Systems.
7. May, K. (2024, May 13). What Is Artificial Intelligence? NASA. https://www.nasa.gov/what-is-artificial-intelligence/
8. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14, 627–660.
9. Vanneste, B. S., & Puranam, P. (2024). Artificial intelligence, trust, and perceptions of agency. Academy of Management Review, 50(4), 726–744. https://doi.org/10.5465/amr.2022.0041
10. Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. Journal of Applied Psychology, 92, 909–927.
11. McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2, 1–25.
12. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56, 81–103.
13. Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, 106635.
14. Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust: The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120.
15. Belanche, D., Casaló, L. V., Schepers, J., & Flavián, C. (2021). Examining the effects of robots' physical appearance, warmth, and competence in frontline services: The humanness–value–loyalty model. Psychology and Marketing, 38, 2357–2376.
16. Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems, 44, 544–564.
