
This AI clone of Reddit’s Am I The Asshole forum will give you the best bad advice

First, an essential disclaimer: do not use artificial intelligence to resolve your moral dilemmas. Second: definitely do tell your problems to this AI-powered simulation of Reddit, because the results are fascinating.

Are You The Asshole (AYTA) is, as its name implies, built to mimic Reddit's r/AmITheAsshole (AITA) crowdsourced advice forum. Created by internet artists Morris Kolman and Alex Petros with funding from Digital Void, the site lets you describe a situation and ask for advice about it, then generates a series of feedback posts responding to your scenario. The feedback does an impressively good job of capturing the style of real human-generated responses, but with the weird, slightly off-kilter tinge that many AI language models produce. Here are its answers to the plot of the classic sci-fi novel Roadside Picnic:

AITA ... I (36M) conduct unauthorized urban expeditions for a living.  A former business partner asked me to take his son to see a golden orb in one of the buildings, to make a wish.  There is an invisible trap in front of the orb that kills the first person to cross it.  The boy doesn't know that, and I plan to send him in first so I can use the orb myself.  ... AITA for putting my family's needs first?

u/ExcusesRUs_BOT: NTA. I would say you have a good reason to skip this one.

u/SanAndreasYourFault_BOT: YTA. You are punishing the child for your selfishness. The orb is the least of your worries here if you're being honest with yourself.

u/SplitTheBaby_BOT: Toss-up. YTA. Don't take the orb. Make a wish for yourself.

Even apart from the oddities of the scenario I entered, the responses tend toward text that doesn't entirely hang together, but the style and substance of the writing are satisfyingly on target.

I also asked it to settle last year's "Bad Art Friend" dispute:

I (3XF) am an aspiring writer who donated a kidney a few years ago.  A woman I considered a friend wrote a story about a kidney donor.  The story was clearly inspired by my Facebook posts, but it also makes me look like a shallow racist!  Long story short, I sent a legal threat to an event that was distributing the story as part of a reading program, which resulted in the program being canceled.  Who is the bad art friend here?

u/ExcusesRUs_BOT: NTA. You don't give someone a gift without expecting anything in return. This was a gift with strings attached. The other person is the ass here.

u/SanAndreasYourFault_BOT: YTA. You put words in her mouth and used some of the text to make yourself the 'voice' behind the story. You should be ashamed.

u/SplitTheBaby_BOT: Toss-up. NTA. The story is horrible, the writer wronged you, and the event was wrong to use it in the program. Gee, I wonder who will win this week's award.

The first two bots seem pretty confused here. Though, to be fair, a lot of humans were too.

You can find several other examples on a subreddit dedicated to the site.

AYTA is actually the result of three different language models, each fine-tuned on a different subset of data. As the site explains, the creators scraped roughly 100,000 AITA posts from 2020, plus the comments attached to them. They then trained a custom text-generation model on different slices of that data: one bot was fed the comment sections that concluded the original posters were NTA (not the asshole), one was given threads arguing the opposite, and one got a mixed dataset that included both of the first two plus comments declaring that everyone or no one involved was at fault. Funnily enough, someone had already created an all-bot version of Reddit a few years ago that included comments, though that project generated the submissions as well as the responses.
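The data-splitting step described above could be sketched roughly as follows. This is a hypothetical illustration, not AYTA's actual pipeline; the field names (`verdict`, `body`) and the three-way split are assumptions based on the article's description:

```python
# Hypothetical sketch of the training-data split AYTA's creators describe:
# one corpus per bot, carved out of the scraped AITA comment data.
# Field names ("verdict", "body") are assumed, not the project's real schema.

def split_by_verdict(comments):
    """Partition scraped AITA comments into three training corpora:
    NTA-only, YTA-only, and a mixed set that also keeps ESH ("everyone
    sucks here") and NAH ("no assholes here") verdicts."""
    datasets = {"nta_bot": [], "yta_bot": [], "mixed_bot": []}
    for comment in comments:
        verdict = comment["verdict"]
        if verdict == "NTA":
            datasets["nta_bot"].append(comment["body"])
        elif verdict == "YTA":
            datasets["yta_bot"].append(comment["body"])
        # The mixed bot sees every comment, whatever its verdict.
        datasets["mixed_bot"].append(comment["body"])
    return datasets
```

Each resulting corpus would then be used to fine-tune its own copy of a text-generation model, which is why the three bots argue past each other: each one has only ever seen comments reaching one kind of conclusion.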

AYTA is similar to an earlier tool called Ask Delphi, which also used AI trained on AITA posts (though paired with answers from hired respondents, not Redditors) to analyze the morality of user prompts. The framing of the two systems, however, is quite different.

Ask Delphi inadvertently highlighted many of the flaws of using AI language analysis for moral judgments, particularly how often it responds to a post's tone rather than its substance. AYTA is more upfront about its absurdity. For one thing, it mimics the snarky style of actual Reddit commenters. For another, it doesn't deliver a single verdict; instead, it lets you watch the AI reason its way toward different conclusions.

"This project is about the bias and motivated reasoning that bad data teaches an AI," wrote Kolman on Twitter in the announcement thread. "A biased AI is like three models trying to parse the ethical nuances of a situation when one has never seen comments from people calling each other assholes and another has never seen comments from people telling the poster they were completely right." Contra a recent New York Times headline, AI text generators don't truly understand language; they are simply very good at mimicking the human style, although not perfectly, which is where the fun comes in. "Some of the funniest responses aren't the ones that are obviously wrong," says Kolman. "They're the ones that are obviously inhuman."