Example A from Central Florida:
We, here in Central Florida, have given up on this neighborhood social networking site. They disable accounts willy-nilly. My account was disabled because of this entry. “Our downtown is full of crime. We need more police presence. I am fearful of going downtown.” They called this entry “uncivil.”
Example B from Indiana:
I am a community leader. I complained about trash on our city streets, and my account was disabled by this networking site. I received no notice. They called my submission “unneighborly.”
Example C from Texas:
I do not understand this social networking site. They keep disabling my account for 30 days but never explain why.
It is odd that the first time I become even a little critical of my local government on the social networking site, my account is inexplicably disabled. It is as if the local government controls the website and acts as a censor.
Social networking websites have become popular, and social media is now a vital part of public discourse. In DC, politicos claim that they read neighborhood networking sites such as Nextdoor and pay attention to them. Facebook alone has more than 3.03 billion active users, and some people rely on it for family networking.
All of these networks have guidelines, but these guidelines are elusive and general, such as “be civil.” They need a fair, transparent complaint management system that contains due process procedures.
Presently, the way they manage complaints seems arbitrary and inexplicable. The process is so opaque that rating sites such as the Better Business Bureau (BBB), ComplaintsBoard.com, and SiteJabber.com rate the social networking sites one star out of five in consumer satisfaction. Clearly, they need the guidance and assistance of complaint management experts.
Facebook’s Complaint Management per AIchat
Facebook had a complaint management system in place to handle user complaints and reports of various types of content and behavior on its platform. This system allowed users to report issues such as:
1. Harassment or bullying: Users could report instances of harassment, cyberbullying, or abusive behavior from other users.
2. Hate speech and offensive content: Reports of hate speech, offensive language, or content that violated Facebook's community standards could be submitted.
3. Spam and fake accounts: Users could report spammy or fake accounts that engage in deceptive or harmful activities.
4. Impersonation: Facebook allowed users to report instances of impersonation, where someone was using their name or identity without permission.
5. Copyright infringement: Users could report copyrighted content that was being used without permission.
6. Privacy violations: Reports of privacy violations, such as the sharing of personal information without consent, could also be submitted.
To report such issues, users typically had to follow these steps:
1. Locate the content or profile you want to report.
2. Click the "..." (three dots) icon in the top right corner of the post or profile.
3. Select the "Find support or report post/profile" option.
4. Follow the on-screen prompts to provide details about the issue and submit the report.
Facebook would then review these reports and take appropriate action, which could include removing content, disabling accounts, or taking other measures to address the reported violations.
COMMENT: One will note that there is a lack of explanation as to the process once the complaint is filed.
- Who will review?
- Is there an appeal?
- Is there an opportunity for the writer to explain?
- What is the time period?
- How is the appropriate action determined?
The Complaint Management System of Twitter aka X per Chat.OpenAI
Twitter had a complaint management process in place to handle user reports of various types of content and behavior that violated its rules and policies. The process generally involves the following steps:
1. Locate the Tweet or Account: Find the specific Tweet or Twitter account that you want to report. You can report individual Tweets, profiles, or direct messages.
2. Report the Issue:
· For Tweets: Click the downward arrow icon (⌄) in the top right corner of the Tweet and select "Report Tweet." Follow the on-screen prompts to provide details about the issue.
· For Profiles: Visit the profile of the account you want to report, click the three-dot icon (•••) on the right side of the cover photo, and select "Report." Follow the on-screen prompts to report the profile.
3. Select the Type of Issue: Twitter will ask you to specify the type of issue you are reporting. Common options include harassment, hate speech, impersonation, spam, and more.
4. Provide Details: You'll be prompted to provide additional details about the issue, such as specific information related to the violation.
5. Submit the Report: After providing the necessary information, submit your report. Twitter will then review the report and take appropriate action based on its policies.
6. Optional Block or Mute: In some cases, Twitter may suggest that you block or mute the account you're reporting. Blocking prevents the account from interacting with you, while muting hides their Tweets from your timeline.
7. Follow Up: You can check the status of your report by going to your "Support Dashboard" on Twitter, where you can see the progress of reported issues.
Twitter's response to reported violations can vary depending on the severity and nature of the issue. Actions taken by Twitter may include issuing warnings, suspending accounts, or removing specific content.
COMMENT: The same questions as in Facebook are present in this Twitter description.
Pinterest Complaint System
If you find content that shouldn't be on Pinterest, you can report it anonymously for Pinterest to review. You’ll still be able to see the person’s account, boards, Pins, and comments after you report them. To see less of their content, you can unfollow or block them. You can learn more about what's allowed in Pinterest’s Community Guidelines.
Report a Pin
1. Click into the Pin
2. Click at the top of the Pin
3. Click Report Pin
4. Pick your reason for reporting, then click Next
5. Click Done
Report an account = 5 steps.
Report a comment or photo comment = 5 steps.
To report a photo comment for intellectual property rights, contact Pinterest. Include the Pin URL and the username of the person who posted the comment in your message.
Report a message = 5 steps.
Report a board = 5 steps.
Why would Pinterest deactivate my account?
Accounts may be suspended due to single or repeat violations of Pinterest's Community Guidelines concerning: Pinner Safety (for example: hateful speech, pornography, graphic imagery and misinformation), Account security (including impersonation and third-party logins), and Spam.
Can someone see who reported them on Pinterest?
The only time someone will know they’ve been reported is if their content is removed as a result. Pinterest itself can always tie reports back to your account, so abuse of reporting could lead to issues for the reporter, but the reported user will not learn your identity.
Pinterest isn’t a place for antagonistic, explicit, false or misleading, harmful, hateful, or violent content or behavior. Pinterest may remove, limit, or block the distribution of such content and the accounts, individuals, groups, and domains that create or spread it, based on how much harm it poses. You can read more about how Pinterest puts its Community Guidelines into practice on its Enforcement page.
Pinterest says it is committed to presenting users with clear, transparent expectations that are easy to understand and follow; users with questions or problems are told to contact Pinterest.
Pinterest isn’t a place for adult content, including pornography and most nudity. They remove or limit the distribution of mature and explicit content, including:
Sexualized content, even if the people are clothed or partially clothed
Graphic depictions of sexual activity in imagery or text
Video Games such as Call of Duty Also Need Specific Guidance
Call of Duty is a very popular gaming franchise with over 250 million players. Much like the social networking sites, its complaint process is neither transparent nor specific.
What are the rules for chat in Call of Duty?
If the speech is found to violate Call of Duty's official Code of Conduct, which bars “derogatory comments based on race, gender identity or expression, sexual orientation, age, culture, faith, mental or physical abilities, or country of origin,” players will be given the appropriate moderator actions.
This code of conduct tells players to:
- Treat others with respect.
- Compete with integrity.
Using an AI voice chat moderation tool called ToxMod, they have restricted over one million users. They first issue warnings and then escalate to other penalties, such as limiting features.
Much like the social networking sites, users do not know exactly why they are being warned or restricted.
Could Call of Duty not share the same specifications they give to ToxMod?
A Proposed Complaint Management System
As one examines the above complaint systems, one sees that they all mimic each other. They all lack transparency. They value anonymity. There is very little notification or specificity. They use elusive generalities such as “be civil,” “be neighborly,” and “avoid antagonistic speech.” Users are bewildered and feel victimized. Because of the lack of specifics, they do not know what mistakes they may have made. These systems allow enemies or governmental entities to attack or mute users.
Complaint Management Experts might recommend the following. The complaint management system would be:
- Easy to use,
- Fair, and with
- Due process procedures.
Step One: When a complaint is received, it will be clearly attached to a name or handle. The named complaint will be reviewed initially, online, by three named and trained persons. If two or more deem it legitimate, a specific explanation will be sent to the complainee, with an opportunity to respond. In most cases, this will resolve the situation.
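The two-of-three review in Step One could be sketched as follows. This is a minimal illustration only; the `Complaint` record, the `initial_review` function, and the reviewer-vote structure are hypothetical, not any network's actual code.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    complainant: str  # every complaint is attached to a name or handle
    complainee: str   # the account being complained about
    text: str         # the entry at issue

def initial_review(complaint: Complaint, votes: dict[str, bool]) -> str:
    """Three named, trained reviewers each vote on whether the complaint
    is legitimate; two or more 'yes' votes trigger a specific written
    explanation to the complainee, who may then respond."""
    if len(votes) != 3:
        raise ValueError("exactly three named reviewers are required")
    yes_votes = sum(votes.values())
    if yes_votes >= 2:
        return (f"legitimate: explanation sent to {complaint.complainee} "
                f"with an opportunity to respond")
    return "dismissed: complainant notified"
```

Note that the reviewers, like the complainant, are named rather than anonymous, which is the transparency the essay argues for.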
Note on using names in the complaint system: Some may believe that including names has ramifications. Overriding these concerns is the need for transparency and responsibility.
- When one uses Uber or Lyft, the name of the driver is clear.
- Even the Chick-Fil-A register worker’s name is on the receipt.
- It has been learned that even the names of Grand Jurors in Georgia must be released.
So, surely the risk of revealing names in the complaint process is outweighed by the need for reliability and responsibility. Those who do not want their names revealed, and do not want to be responsible, should hesitate to participate voluntarily in this complaint process.
Step Two: If the entry writer (the complainee) disputes the finding, the matter could be scheduled for online mediation. The mediation might include three parties: the complainant, the complainee, and network staff. The mediator would be an outside mediator. Most likely, 95% of the time, this will resolve the matter.
Step Three: If not resolved, the complaint could be submitted to a three-person online arbitration board. The arbitrators would be outside the network, and their decision would be binding. These arbitration decisions would be public to network members, so members would fully understand the guidelines and how the system works.
Penalties Transparent and Logical: There could be a wide variety of escalating penalties. The first should be a warning indicating specifically what the issue is. If another transgression occurs, the next step could be disabling the account for a number of days, then weeks, then months, and so on.
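An escalating penalty ladder of this kind is easy to make transparent, because it can be published as a simple table. A minimal sketch, with purely illustrative rungs and durations (not any network's actual policy):

```python
# Hypothetical escalating-penalty ladder: a specific warning first,
# then account suspensions of increasing length.
PENALTY_LADDER = [
    "warning stating specifically what the issue is",
    "account disabled for 3 days",
    "account disabled for 2 weeks",
    "account disabled for 3 months",
]

def next_penalty(prior_violations: int) -> str:
    """Penalty for a user's next offense, given how many prior
    violations they have; escalation caps at the harshest rung."""
    rung = min(prior_violations, len(PENALTY_LADDER) - 1)
    return PENALTY_LADDER[rung]
```

Publishing the ladder itself, alongside the specific reason for each step, is what makes the penalties both transparent and logical.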
Specificity: General terms such as “being neighborly” or “being civil” or “no name calling” are okay to place in the guidelines if they are followed by specifics.
- For example, is calling a neighborhood “chaotic” uncivil? Nextdoor seems to think so and disabled an account because of it.
- What about calling a community block “crime ridden”? Nextdoor seems to think this is unneighborly and thus disabled an account.
- There may be 200 synonyms and antonyms for a “mean” person, including scoundrel, wretch, tyrant, and villain. Are all of these banned under the metric of “no name calling”?
- What about calling someone a “buttinsky,” a “gadfly,” a “troll,” or a NIMBY?
One might quote former U.S. Supreme Court Justice Potter Stewart, who famously said of pornography, “I know it when I see it.” Many likewise say that the meaning of speech is in the eye of the beholder. Such standards are unacceptable and unhelpful for a complaint system.
Of course, there is nothing magic about the above suggested complaint system, so it may vary widely.
Reviewers must keep up to date with trends. For example, ten years ago the “F” word was taboo; today, not so much. It is found in 10% of pop songs, and even a Canadian court has ruled that society now accepts the term.
The present complaint management system of most social networking sites is unclear, opaque, unfair, lacking in due process, and too general and vague. This enables and perpetuates political censorship, neighborhood battles, petty workplace squabbles, chronic complainants,* and personal vendettas, all under the guise of anonymity and secrecy.
Instead, the networks could have an effective complaint management system that exudes transparency and due process.
*The Washington Post (10/9/23) reported on a chronic complainant who filed 71 of the 78 complaints to ban books in Spotsylvania, Virginia, during 2022. Chronic complainants need to be identified and managed; transparency and due process in the complaint management process accomplish this.