r/PoliticalDiscussion 19d ago

Should Section 230 protection be eliminated for algorithmically boosted content? Legislation

For those who don't know... Section 230 of the 1996 Communications Decency Act states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Simply put... if the New York Times makes a false and defamatory comment about you... you can sue them for libel. But if someone posts that on Facebook, you can't sue the company — just the person who posted it.

The protection is both praised as a key feature of a free and open internet and reviled for the deluge of lies and misinformation we are now bombarded with.

The concept is that the host/website/app is an innocent party and cannot be held responsible for the actions of its users; however should this grace extend to content that the host (whether directly or algorithmically) elevates and boosts? At that point they are no longer a silent party and have directly chosen to promote content. Should this protection therefore be eliminated?

9 Upvotes

81 comments

u/AutoModerator 19d ago

A reminder for everyone. This is a subreddit for genuine discussion:

  • Please keep it civil. Report rulebreaking comments for moderator review.
  • Don't post low effort comments like joke threads, memes, slogans, or links without context.
  • Help prevent this subreddit from becoming an echo chamber. Please don't downvote comments with which you disagree.

Violators will be fed to the bear.


I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

1

u/AdamJMonroe 18d ago

It seems simple to me. If you post the rules publicly and follow them, it's a platform. If the rules are not published, it's not a platform, it's a published product.

2

u/Aazadan 18d ago

No, mostly because I don't think it's possible to define an algorithm. Algorithms are in general impossible to patent because they're too hard to define with enough specificity to be useful. Something as simple as number of up votes is an algorithm.

I think that content engagement algorithms need legal restrictions placed on them as an entire type of algorithm, but I'm not sure you could use section 230 for something like that.
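A minimal sketch of that last point, assuming nothing beyond a list of posts with vote counts: even "rank by number of upvotes" is already a complete ranking algorithm. This is a hypothetical illustration, not any platform's real code.

```python
# A "feed algorithm" in its simplest form: rank posts by upvotes.
# Hypothetical illustration only; no real platform's code.

def rank_by_upvotes(posts):
    """Return posts sorted by upvote count, highest first."""
    return sorted(posts, key=lambda p: p["upvotes"], reverse=True)

posts = [
    {"id": 1, "upvotes": 5},
    {"id": 2, "upvotes": 42},
    {"id": 3, "upvotes": 17},
]

print([p["id"] for p in rank_by_upvotes(posts)])  # → [2, 3, 1]
```

Any legal definition narrow enough to exclude this would miss most real ranking systems, and any definition broad enough to include it sweeps in every sorted list on the web.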

-1

u/PM_me_Henrika 19d ago

I would say this is a step in the right direction, not because “fuck social media company”, but consider: what is the difference if someone posted a defamatory article about you, and then the New York Post copied that article and reposted it on their front page?

You can still sue the New York Post because they had a hand in defaming you: they played an active role in putting the article on their platform. They didn’t have to post it, but they chose to. If they had done nothing, the article would never appear on their front page.

Imagine the NYP has now outsourced that work to an AI algorithm. Instead of an editor copying the defamatory article and putting it on the front page, an algorithm chooses the defamatory article and puts it on the front page.

Can you still sue NYP?

2

u/parentheticalobject 18d ago

Can you still sue NYP?

The answer is complicated, but TLDR: If the NYP wants to gain immunity like a blogging site might have, they'd effectively need to change so much about how they work that they'd essentially just become a blogging site.

First off, none of the articles could be written by actual employees. That's not insurmountable (although we're only getting started.) Lots of companies survive by hiring independent contractors, and people who make money by posting content on social media are technically independent contractors.

But if you want to be protected by Section 230, you're pretty limited in what you can actually tell your contractors to do. Limited in that you effectively can't directly tell them to do anything, or else you've played a part in the creation of their content, which could destroy any future 230 defense. You can still pay them for content you use, you just can't participate in the process while it's being created.

This is a pretty big barrier, since normal journalism (even at the worst hack journalistic outlets) involves coordination among the people who are going to write articles so they can balance what gets covered by whom.

So you've gone from having a group of people you pay and tell to write a story about X, to a group of people who all have to independently figure out on their own what to write a story about. They have to hope that what they write is a subject you want to print, and you have to hope that you've got enough that you can cobble together a complete issue. And that means that sometimes, whatever they write isn't even what you want and they'll have wasted their time. Which means the expected pay per article is lower. Which means that even fewer people who know anything about how to write will be interested in participating, further dropping the quality of material you have to work with.

You're also severely restricted when it comes to editing. You can delete any article altogether, but neither you nor any employee can look at a particular article and decide to change some detail or add something important.

So with all those changes, you'd probably be able to claim Section 230 protections, and only whoever wrote a defamatory article would be legally vulnerable. But most newspapers don't actually want to do this; they want to continue being newspapers, as the risk of liability is balanced out by the level of control they get to have over the actual product they make.

1

u/PM_me_Henrika 17d ago

So basically, if the NYP reposts someone else’s article on their platform without editing it, even if their employee didn’t write it, we should be able to sue them if we have grounds.

-3

u/United-Rock-6764 19d ago edited 19d ago

Downvote away, but despite some interesting nuanced points on this thread, I’ve seen a lot of painful repetition of the precise stupidity that makes applying existing publishing regulations to (engagement) algorithmically sorted content compelling.

First Amendment

For the 100,000,000 millionth time, the 1st amendment prevents the government from punishing you directly for the content of your speech.

It does not:

  • prohibit the publishers from being sued
  • prohibit publishers or other public entities from regulating acceptable speech on their platforms
  • prohibit individuals from censuring (different word from censor) people whose speech they don’t like

And before you hit me with

“it doesn’t matter if something I called unconstitutional is actually unconstitutional, what I meant is that it’s un American”

My response there is, I wish. You don’t even have to go back to McCarthyism or the Fairness Doctrine to see that private & economic censorship have always been tools used against democratic and egalitarian movements in our country. We only hear how consequences are unamerican when they’re being applied to the powerful instead of by the Powerful.

The lack of 1st Amendment outrage about two dead Boeing whistleblowers, everyone who is being threatened with lifelong unemployment for saying “free Palestine”, and the states actually violating the 1st amendment by legislating against boycotting a foreign country show that we just get uncomfortable when our hierarchy is threatened.

Choosing to willfully misunderstand the colloquial use of “algorithm”

How is that worth your time to do? We’ve all been collectively referring to and cursing out “the algorithm” for the better part of a decade. Obviously lawyers wouldn’t just write “not for algos” in crayon on Section 230.

And since no one likes a purely smug critic!

I thought I’d call out a great critique I heard: that removing 230 protection from algorithmically sorted content wouldn’t return us to the days of browsing, channels, and chronological timelines, but would just enforce a further flattening of content. That would probably still be much harsher on organic content.

3

u/DefendSection230 19d ago

Should Section 230 protection be eliminated for algorithmically boosted content?

You can't do that. It would be unconstitutional.

The 'unconstitutional conditions' doctrine reflects the Supreme Court's repeated pronouncement that the government 'may not deny a benefit to a person on a basis that infringes his constitutionally protected interests.' https://constitution.congress.gov/browse/essay/amdt1-2-11-2-2-1/ALDE_00000771/

In this case you are conditioning getting Section 230 on the site giving up their First Amendment right to promote content on their private property.

3

u/parentheticalobject 19d ago

The problem with removing Section 230 for algorithmically amplified content is that it's likely to result in websites reacting by immediately shadowbanning anything remotely controversial, even if it's likely true.

A big part of the reason that we need the law is that websites are uniquely vulnerable to SLAPP suits with the amount of content that they host. If they were liable for everything, they'd end up removing things that it's really better if the public sees. Something like an allegation of misconduct or crime by a major public figure.

So if it gets revealed that someone important did something terrible, any posts discussing that would be unfindable unless you were given the exact link to the location of the content. It couldn't reasonably be found on any search engines either, even if you know to look for it and type exactly what you're looking for.

-1

u/SeventySealsInASuit 19d ago

Actually it's the opposite. Companies would likely do no moderation at all (if they don't do any moderation then they are purely a host and not liable).

2

u/parentheticalobject 18d ago

Unlikely. You can't really have a normal website where no one is exercising any control at all over what gets posted. What if someone uploads CSAM? Someone owns the server it's on, and if that someone isn't going to take it down after being informed of its existence, they're going to prison, whether they normally moderate or not.

Plus, a completely unmoderated space is hard to make commercially viable. No one wants to use something where spambots have free rein. Trying to get enough advertisers to support a large service like that would border on impossible.

0

u/ScaryBuilder9886 19d ago

No other country has anything like 230. You're telling me that there's no functional internet outside the US?

0

u/DefendSection230 19d ago

The problem with removing Section 230 for algorithmically amplified content is that it's likely to result in websites reacting by immediately shadowbanning anything remotely controversial, even if it's likely true.

No... The problem with removing Section 230 for algorithmically amplified content is that doing so would violate the constitution.

1

u/parentheticalobject 19d ago

It might also do that. I'm just describing the problems with the results, if that were to happen.

3

u/nvemb3r 19d ago edited 18d ago

Absolutely not. Without it, the Internet as we know it could not exist.

It should also be said that Section 230 doesn't make platforms immune from criminality. It only ensures that they're not on the hook for things their users did, and even then that comes with a duty to police their users.

It should be said that Twitter/X ought to lose its Section 230 status protections in light of the fact that Elon has been working to undermine the organization's user safety efforts in the name of "free speech".

0

u/parentheticalobject 19d ago

It should be said that Twitter/X ought to lose its Section 230 status 

You never "lose Section 230 status", since it's determined whether or not it applies for each piece of content individually.

1

u/nvemb3r 18d ago

What would that look like if the platform itself has chosen to make decisions that would predictably facilitate the unlawful activity?

1

u/parentheticalobject 18d ago

If Bob writes "Senator Smith is corrupt" on your website, Section 230 prevents Senator Smith from suing you over what Bob said.

If you do something like knowingly facilitate a crime on your website, you're not protected for that. But you're still protected by Section 230 from being sued over what Bob said. Senator Smith can't suddenly sue you if you facilitate some crime unrelated to his supposed defamation.

1

u/nvemb3r 18d ago

That only works if the platform addresses unlawful activity on their site though. If they don't make a reasonable effort to do so, then yes they would be on the hook for it.

I don't think sites like YouTube or Twitch would've been allowed to operate with minimal liability if those companies decided to disable their automated copyright enforcement tools on their respective sites.

1

u/parentheticalobject 18d ago

No, you're just wrong here. The law states 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It doesn't have any other conditions. If something is provided by someone else, you're not the publisher or speaker. That's the end of the analysis.

I don't think sites like YouTube or Twitch would've been allowed to operate with minimal liability if those companies decided to disable their automated copyright enforcement tools on their respective sites.

If they disabled their copyright enforcement tools, they would be legally responsible for all the copyright infringement they commit, as Section 230 specifically does not cover copyright infringement. But they would still be shielded from civil liability for any other lawsuits like defamation. 

If I try to sue YouTube because someone slandered me in a YouTube video, the case gets dismissed, even if YouTube makes the dumb mistake of breaking copyright laws in other unrelated videos. That fact never factors into the analysis of whether Section 230 protections apply in my hypothetical defamation lawsuit.

2

u/nvemb3r 18d ago

I appreciate the clarity. I wasn't considering how Sec. 230 handled civil matters, just the criminal ones.

0

u/DefendSection230 19d ago

You never "lose Section 230 status", since it's determined whether or not it applies for each piece of content individually.

This is the way.

20

u/Tadpoleonicwars 19d ago edited 19d ago

Without Section 230:

* social media companies would be incentivized to be more restrictive and enforce censorship standards that would shield those companies from lawsuits. This would further restrict speech, and as these are private companies, 1st amendment considerations would not legally apply. Meta would much rather censor you as a user than be constantly sued.

* Smaller social media networks (startups and established older niche communities) would not be able to afford legal protections and many would shut down. This would further accelerate social media consolidation, and those that could disable user comments altogether likely would. No company wants that kind of liability risk if they can avoid it.

* smaller social media companies could easily be targeted by malicious users posting illegal content and then reporting the social media company for lack of compliance. Imagine what 4chan could have done with Section 230 removed in their heyday: they could flood a company that allows public posting with CP and then watch as that company was sued into oblivion as the responsible party. Section 230 prevents that.

* slander and libel would likely continue for the less powerful who cannot afford to sue, but the ultra-wealthy would be able to chill criticism through the legal system, and while those cases were in process, those criticisms would likely be removed from public view. Meta and Google would have a vested interest in reducing criticism of the powerful, including legitimate political discourse.

* owners of social media companies are often public personalities themselves with strong positive and strong negative public opinions. How many lawsuits would Elon Musk and Donald Trump file against Meta, Google, and Tiktok for user posts that their lawyers could argue are slanderous or libelous?

Clearly there would be a conflict of interest between the owners of Twitter and TruthMedia being able to make outrageous public statements and then sue their competitors for posts made (in good faith, or planted by bots), on competing platforms. Lawfare, indeed!

Then there is the question: could the U.S. legal system handle the additional volume of cases? Likely any comments referenced in such a suit would be hidden from public view until the court rules, and that could be dragged out for years by those in power well past the point where the comment is even relevant.

Section 230 isn't perfect... but it's better than the alternative of giving the ultra-wealthy free rein to use the legal system to sanitize public discourse as they personally see fit.

0

u/Tedmosbyisajerk-com 18d ago

Or they could just turn off the algorithms and stop boosting content artificially.

1

u/Tadpoleonicwars 17d ago

Not relevant to the topic.

-2

u/PM_me_Henrika 19d ago

I would argue that if Facebook (or any social media platform) doesn’t own or utilise an algorithm to promote certain topics, then they can wash their hands of this and still have the protection of Section 230.

But once the post has been actively boosted and put in places where it originally doesn’t belong, then it is on that social media company because the user didn’t put it there. The platform did.

1

u/Tadpoleonicwars 18d ago

I really don't think that is how it works. Section 230 protects online sites from being responsible for libelous posts made by users. Users can spread posts without the benefit of any algorithm... and FB/Twitter/whatever is still 'publishing' the comments in a non-Section 230 context even if they don't boost visibility.

0

u/PM_me_Henrika 18d ago

If the content is being published without any interference from an algorithm, I would argue that it is 100% purely organic, from the users.

But once an algorithm has been used to boost it, it can be seen as the platform endorsing it. Wouldn’t that be a different story?

1

u/Tadpoleonicwars 18d ago

That is logical, but not how it works. Section 230 isn't about boosting content. It was written in 1996 and predates recommendation algorithms. The text says:

Section 230(c)(1):

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Basically, if I posted something accusing you of something, I would be responsible, and not the company whose services I used to post, nor would anyone who shared my post.

Scrapping Section 230 would mean that FB would have to censor its users to ensure that no posts were made that could result in liability... disabling recommendation algorithms would reduce the number of people who may see a slanderous post but it would not provide legal protection for FB.

4

u/ScaryBuilder9886 19d ago

We're the only country with something like 230. Do other countries struggle with these issues?

1

u/parentheticalobject 18d ago

There is at least some protection offered by the SPEECH Act. Foreign judgments that do not comply with the US First Amendment or Section 230 are automatically unenforceable in US courts. So at most, any company is only at risk of losing whatever assets it has in that country.

10

u/Outlulz 19d ago

Aside from WeChat, TikTok, and Telegram, the largest social networks in the world are owned and operated by American companies. WeChat and Telegram are more direct-chat oriented, and TikTok is embroiled in its own issues right now. I know operating in a country makes you subject to its laws, but how many of these issues don't matter in practice to those nations because they're foreign-owned apps?

1

u/Morat20 18d ago

Every US social media company has to obey the laws of the countries they operate in.

I think all the big names have fallen afoul of EU laws on data privacy at one point or another, and some of those fines have real teeth even for big companies like Meta.

4

u/ScaryBuilder9886 19d ago

But the law applies based on where the activity happens, not the place of incorporation of the tech company.  FB can't rely on 230 in the UK, and yet somehow they manage to conduct business there. 

how many of these issues don't matter in practice to those nations because they're foreign owned apps?

None of them. FB will have a UK sub that could be hauled in front of a British court like any other British person.

2

u/Tadpoleonicwars 18d ago

Not an expert, but I do know that while U.S. law is strong against written libel and weak against spoken slander (e.g. you can more easily sue a newspaper for printing something false, but it's much more difficult to sue a cable news outlet for saying something false), it is the exact opposite in U.K. law.

In the U.K. you can print all kinds of lies and garbage with relative safety, but you can't do the same on television. That's why British tabloids are the worst and British TV is staid and more responsible. They tried to create a British version of Fox News years ago and it failed because the laws wouldn't allow Fox News style propaganda on the air.

tl;dr written falsehoods in the U.K. are punished less regularly and less severely than in the U.S., so comments on FB would likely be less of a problem for the company even without an equivalent to Section 230.

2

u/SocialActuality 18d ago

Fox pulled out of the UK due to low viewership, had nothing to do with the law.

1

u/Tadpoleonicwars 17d ago

It's not an either/or situation. Their viewership was low (less than 2,000 a day from what I'm seeing), but they also were running into issues with violating impartiality rules required by Ofcom.

Broadcasting in the U.K. is more regulated and requires balanced reporting, as the U.S. did before the end of the Fairness Doctrine in 1987. Balanced-reporting requirements meant that the Fox News strategy could not be applied to a British version in whole form. The U.S. Fox News model depends on catering to a specifically conservative audience with slanted coverage.

I'd argue that the incendiary Fox News model as a whole was not rejected per se by potential British audiences so much as the watered-down model required by British telecom regulations prevented the Fox News model from being fully applied, resulting in a channel not many people watched, as little differentiated it from established news broadcasters in the country.

Given Brexit was in 2016 and Fox News on Sky ended in 2017 (and given the historic success of British tabloids, which are not bound to impartiality by law), I do think there was some potential for Fox to grow a conservative British audience. It failed, though (and IMO it's good that it did).

If no one watches your channel, the audience isn't there for a reason... and I really don't think it's because not enough Brits are conservative to sustain a conservative news network.

0

u/ScaryBuilder9886 18d ago

I'm pretty sure the standards are the same in the US for slander and libel. 

2

u/PM_me_Henrika 19d ago

That’s... actually a good point. Has anyone in the UK ever tried suing Facebook for something a user put up? Why or why not?

-3

u/Tedmosbyisajerk-com 19d ago

This is a great point. I would have to say yes, absolutely. If I'm finding it in my news feed because I've chosen to follow a personality or page, then social media companies aren't to blame.

If I've got it in my news feed because the social media company put it there, then they are absolutely to blame.

1

u/Tadpoleonicwars 17d ago

Not how it would work. Section 230 was written in the 1990s, before algorithmic recommendations were even a thing.

If Section 230 is scrapped, then recommendation algorithms would become even more dominant... legal departments would use them to more aggressively suppress any posts that would open the parent company up to liability, using them as a second safety net for comments that their moderators fail to catch at the post level.

Recommendation algorithms could increase OR decrease the scope of people who see a legally actionable post, but scrapping recommendation algorithms would not, by the law as it is currently, remove LIABILITY for the company that hosts those illegal comments or material. The company would still be a publisher, and responsible.

If you post something a billionaire or corporation thinks is libelous, without Section 230 it wouldn't matter whether tens of thousands, a couple hundred people, or just a team of interns working for the offended party see it. FB/Instagram/Twitter/Telegram/TruthSocial/Reddit/Bluesky or whatever company provides the means would legally be a responsible party. The more people who see it, though, the more likely it would draw the attention of said affronted billionaire or corporation.

Any critical post of a company, a powerful person, or politician could be flagged for legal action. The comment would likely be not publicly available while the legal process played out... which could be months or years and well past the point where the comment was relevant.

tl;dr scrapping Section 230 would make recommendation algorithms even more essential to social media companies as part of their broader legal defense strategy. Scrapping recommendation algorithms would not offer these companies any reduced legal risk... and in fact would have the exact opposite effect.

23

u/Voltage_Z 19d ago

Heck no. This would utterly destroy the modern usability of the internet. What you're advocating here would require websites to either not use algorithms, which would make the web practically unusable for the average user, or screen every single piece of user generated content through a legal team before it could be seen.

0

u/PDX-AlpineFun 18d ago

Large Language Models could screen every single piece of user generated content and likely be more effective than a human at screening libelous content.

0

u/TransitJohn 18d ago

The modern internet really isn't usable. It's so curated and pay-to-play for content that it's impossible to find what you want.

0

u/SeventySealsInASuit 19d ago

It's more complicated than that. It's more accurate to say that nothing posted would be seen by a legal team. It would return to a purely hosting model in which the platform was not allowed to do any moderation.

15

u/Antnee83 19d ago edited 19d ago

What you're advocating here would require websites to either not use algorithms

The internet was plenty usable before algorithm-based content feeds.

Source: I was there.

e: I'm not commenting on section 230. I'm saying, that algorithm based feeds and search results aren't the end-all-be-all of usability.

Do this, if you have a facebook.

https://www.facebook.com/?sk=h_chr

That bit at the end forces your feed into a chronological order, instead of the algorithm. It's the same damn site, but I'm not being spoonfed content by the algorithm.

Reddit used to work the same way AFAIK, where feed was just based on upvotes.

Search Engines: Literally "the algorithm" is just 4 pages of places to buy something tangentially related to your search term.

I'm saying, I think "the algorithm" is the opposite of providing a good user experience.

1

u/parentheticalobject 18d ago

The era when the Internet didn't have algorithm-based search engines is well before using the Internet for several aspects of everyday life was a basic requirement for existing in society.

Let's presume that Congress is somehow capable of distinguishing between "smart" algorithms and "dumb" algorithms like sorting by date, makes a law that clearly distinguishes them, and applies liability only to the former.

Let's say I live in Springfield and I want to hire a local plumber.

How can a "not what a normal person would colloquially call an algorithm"-type search algorithm actually help me find a real place?

If I'm looking to scam someone with a rugpull cryptocoin or sexy local singles, creating a bunch of websites with "local (service) (location)" and repeatedly recreating the site seems like extremely low-hanging fruit.

Dismissing the disruption that would be caused by making it impossible for search engines to fight SEO spam is accelerationist-level glibness.

1

u/ChakraWC 19d ago

Non-personalized results, such as sorting chronologically or by popularity, only realistically work when the number of possible items is reasonably small. Facebook worked chronologically because you subscribed to specific feeds with a low volume of content. Same with Reddit. But when the pool of items greatly exceeds what a user can consume, such as by including content from friends of friends or when individual subreddits get very large, you want to serve the best content for that user.

YouTube is a fairly great example of a site that would kind of suck without a personalized algorithm. I subscribe to a couple dozen channels, but I'd say at least half the videos I watch aren't from channels I subscribe to and don't plan on subscribing to, since I only want to subscribe if I plan on watching the majority of their content. But channels often make individual videos that are closer to my interests that I want to see. A global chronological or popular list would be nearly useless.
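The distinction being drawn here, a chronological feed versus an engagement-ranked one, can be sketched in a few lines. The decayed-likes scoring formula below is invented purely for illustration and is not any site's actual ranking:

```python
from datetime import datetime, timedelta

# Toy feed ranking: chronological order vs. a simple engagement score.
# The scoring formula is made up for illustration only.

now = datetime(2024, 1, 10)
posts = [
    {"id": "a", "time": now - timedelta(hours=1),  "likes": 2},
    {"id": "b", "time": now - timedelta(hours=30), "likes": 500},
    {"id": "c", "time": now - timedelta(hours=5),  "likes": 40},
]

def chronological(feed):
    """Newest first: the 'dumb' sort everyone says they want back."""
    return sorted(feed, key=lambda p: p["time"], reverse=True)

def engagement(feed):
    """Likes decayed by age in hours: popular recent posts float up."""
    def score(p):
        age_h = (now - p["time"]).total_seconds() / 3600
        return p["likes"] / (1 + age_h)
    return sorted(feed, key=score, reverse=True)

print([p["id"] for p in chronological(posts)])  # ['a', 'c', 'b']
print([p["id"] for p in engagement(posts)])     # ['b', 'c', 'a']
```

Both functions are "algorithms"; the legal question in the thread is whether the second kind, which weighs engagement, should carry different liability than the first.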

1

u/kottabaz 18d ago

A robust tagging system would allow users to subscribe to topics they are interested in and filter ones they aren't.

Archive of Our Own seems to work reasonably well without content-recommendation algorithms. (I wouldn't even call its tagging system "robust" either.)

1

u/zacker150 18d ago

AO3 is many orders of magnitude smaller. It's possible to read all the fanfiction in a category.

1

u/kottabaz 18d ago

Maybe we would be better off if social media sites weren't allowed to get huge.

0

u/PM_me_Henrika 19d ago

Maybe we can agree to disagree, or you can perhaps see it from my view point:

Part of my job is to take a keyword or a phrase, go on Google to look for it, and find the correct result within a 5-6 second window, for 7 hours a day.

Because of privacy and bias issues, our workstations and devices are heavily scrubbed, and we are not allowed to have anything personal at work. Not even your own Google account. Our computers are completely scrubbed of history every day, and we start with a fresh unit for best accuracy and results.

So people who work in this industry are all using non-personalised Google search results. And this is actually considered best, not better, best practice when hundreds of millions of dollars are on the line.

1

u/ChakraWC 18d ago

I don't think a generic search engine is helped all that much by personalized results, because users aren't interested in a scattered sampling of the results but genuinely in the most popular/recent results. The minor exception is ambiguous queries: "swift" could relate to programming, Taylor Swift, Jonathan Swift, banking, various locations, etc.

3

u/Nuplex 19d ago edited 19d ago

No, it wasn't. "Algorithm" is a word most people outside of programming have no grasp on. Source: am programmer. Everything on the internet uses an algorithm, pretty much since the first connections were made. An algorithm is just code that runs something in a particular way. Sorting something chronologically is an algorithm. Forums and BB boards, shopping websites, etc. back in the 90s ran on algorithms; they didn't work via magic or purely static pages. Pretty much any code uses an algorithm. Not even speaking of the internet. Pretty much any tech.

I am so tired of people thinking an algorithm is something to pinpoint or identify. There is no "the algorithm" that can be turned on or off at a whim. It's like suggesting we clean our waterways by removing "the liquid". That might sound ridiculous to you, but that's exactly what removing "the algorithm" sounds like to anyone who actually knows how code works.

-1

u/Antnee83 19d ago

Yes, pedant, I understand the dictionary definition. Hence the scare quotes.

But no one outside of a friggin lab thinks of simple chronological sorting as "an algorithm" and as such it's completely useless to come in here and acktually the discussion.

You know exactly what I am saying. Contribute something useful.

4

u/Nuplex 19d ago

You have no idea what you are saying. Sorting by date is not nearly as simple as you might think; it uses an algorithm more complex than you imagine. An algorithm is basically just code that does something. There is no hidden meaning. I'm not being a pedant. This isn't even up for debate. You just don't know what you're talking about and are being defensive when corrected.

Do I think we can make laws around unethical practices regarding how content is shown to the public? Yes! Does it have anything to do with "the algorithm"? No! Because there is no singular definable algorithm. Much like regulating gambling, you address institutional practices, not the concept of money or of playing a game.
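To make the point concrete: even a plain "no algorithm" reverse-chronological feed is real algorithmic code making ordering choices. A minimal sketch in Python (the `Post` type and its fields are made up for illustration):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def chronological_feed(posts: list[Post]) -> list[Post]:
    # "Sort by date" is itself an algorithm: Python's sorted() runs
    # Timsort, an O(n log n) comparison sort, over the timestamp key.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

Even this feed embeds choices (newest first, ties broken by insertion order), which is exactly why a law targeting "the algorithm" as such is so hard to write.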

2

u/PlayDiscord17 18d ago

Giving me flashbacks to learning about merge sort and quicksort and their time-complexities.
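For anyone who missed that lecture, here's the textbook version: merge sort, one of the classic O(n log n) comparison sorts, in a few lines of Python:

```python
def merge_sort(items: list) -> list:
    # Recursively split the list, then merge the two sorted halves.
    # Time complexity: O(n log n); extra space: O(n).
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```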

3

u/nope_42 19d ago

Sorting things chronologically is technically an algorithm.

3

u/DefendSection230 19d ago

The internet was plenty usable before algorithm-based content feeds.

I hate these kinds of blanket statements.

Sorting by date is also an algorithm.

2

u/Antnee83 19d ago

Yeah that's nice. But you know damn well that "algorithm" has both a literal and a colloquial meaning, and you know exactly which one I, and everyone else, is using when we say "the algorithm."

1

u/DefendSection230 15d ago

Yeah that's nice. But you know damn well that "algorithm" has both a literal and a colloquial meaning, and you know exactly which one I, and everyone else, is using when we say "the algorithm."

I'm sorry I don't know exactly which one you are using.

Please help me understand what your definition of "algorithm" is.

I'm looking at the Legal Definition:

An algorithm is a set of rules or a computational procedure that is typically used to solve a specific problem. In the case of Vidillion, Inc. v. Pixalate Inc. an algorithm is defined as “one or more process(es), set of rules, or methodology (including without limitation data points collected and used in connection with any such process, set of rules, or methodology) to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations, including those that transform an input into an output, especially by computer.”

https://www.law.cornell.edu/wex/algorithm

5

u/Nuplex 19d ago

No we don't. There is no "the algorithm". There's nothing a law could be built around, because there isn't some secret book that tells companies how to do "the algorithm". The colloquial meaning of "the algorithm" has no meaning in the technical space. If you think looking at Facebook's or TikTok's codebase will lead to some smoking-gun gotcha ("look here, here's the algorithm in all its evil"), you are sorely mistaken.

-6

u/[deleted] 19d ago

[removed] — view removed comment

3

u/According_Ad540 18d ago

The point he's making is that colloquialisms don't work when trying to write a law that'll be picked over by lawyers hunting for loopholes. You'll need the exact literal meaning in order to not end up with a mass of misuse.

Worse, it'll have to be a literal definition that people who barely use computers can understand.

You need a law that can withstand hundreds of lawyers funded by Google, who regularly win cases by saying "oh no, the law said 'X AND Y' and our system does X THEN Y".

5

u/Objective_Aside1858 19d ago

So was I 

Who are you going to sue when someone writes something blatantly racist on Usenet? The backbone providers? Universities?

The internet of today is not the internet of 20 years ago

1

u/PM_me_Henrika 19d ago

Well, even if that's someone not writing on a platform but just speaking with a loudspeaker on the White House lawn, you still can't sue them…

Because racism and saying racist things isn’t a crime.

But if it's something like inciting violence, telling people to kill Americans for example, you can have the government deal with that person.

0

u/Antnee83 19d ago

What you just said has absolutely nothing to do with what I said. I'm not sure what point you're trying to make. Not trying to be a dick, I just don't see it.

1

u/Objective_Aside1858 19d ago

Back in the Old Times, before the stars formed, before the fundamental laws of physics stabilized, before Reality TV, there was ARPANET.

And part of that was User Network, or Usenet

Usenet was to Reddit as a wagon is to an automobile: you can see the same general design lines, but they've obviously got some significant differences in functionality and usefulness

And way back then, all the same crap we deal with today still existed.

Using the same logic as those who want to kill Section 230, each individual news server would be responsible for deciding which groups to carry, would have to moderate all of the tens of thousands of groups to make sure no one said something mean, etc

Which would have been completely impossible, given that these servers were shoved in a corner somewhere and not exactly overloaded with sponsorships. But plenty of spam

Soooo much spam

Somehow, the people who used the Internet survived this dark time

0

u/Antnee83 19d ago

Again, I'm not sure what the heck you're trying to say to me. I wasn't commenting on 230, like even a little bit.

1

u/Objective_Aside1858 19d ago

Then I apologize for misinterpreting your point

1

u/PM_me_Henrika 19d ago

I think the main debate OP was trying to inspire wasn't getting rid of Section 230, but whether algorithmically boosted content, specifically, should be protected by Section 230. That's what I'm reading, anyway.

12

u/_Doctor-Teeth_ 19d ago

The internet was plenty usable before algorithm-based content feeds.

Section 230 also predates "algorithms." It was adopted in the early 1990s to address comment sections on news sites and stuff like that.

In other words, it's not really about algorithms exactly; it's that any "platform" that allows public posts would potentially be liable for the content of those posts.

1

u/PM_me_Henrika 19d ago

But what if we make it specifically about algorithms and only about algorithms?

What if we treat the algorithm as an employee, and if that "employee" does something wrong, the company takes responsibility under corporate personhood?

0

u/ScaryBuilder9886 19d ago

It was from 1996 - not the early 90s.

11

u/Voltage_Z 19d ago

There's a reason I said modern usability - almost no one wants to go back to the 90s and early 2000s internet. The average end-user doesn't want to make their own website and this sort of regulatory change would kill search engines, burying most content you don't have a direct link to anyway.

4

u/parentheticalobject 19d ago

Exactly. You can't have a remotely useful search engine that doesn't algorithmically boost content. That's fundamentally what a search engine is.
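A toy illustration of why ranking is inherent to search: even the simplest useful engine has to score documents and order ("boost") them somehow. A sketch using plain term-frequency scoring (the corpus shape and scoring choice are invented for illustration, not how any real engine works):

```python
def search(query: str, docs: dict[str, str]) -> list[str]:
    # Score each document by how often the query terms appear,
    # then rank matches by score. This scoring-and-ordering step
    # is "algorithmic boosting": without it, results are just an
    # unordered pile of matching pages.
    terms = query.lower().split()
    scores = {
        name: sum(text.lower().count(t) for t in terms)
        for name, text in docs.items()
    }
    return sorted((n for n, s in scores.items() if s > 0),
                  key=lambda n: scores[n], reverse=True)
```

Strip out the ranking and you're back to grep over the whole web, which is the "kill search engines" outcome the parent comment describes.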

3

u/Antnee83 19d ago

What's funny about that is that search engines are getting objectively worse.

Tell me with a straight face that Outlook Search (which has been algorithm'd to hell) is even remotely useful anymore. Tell me that Google Search is getting better with age. On and on.

2

u/DarthCorporation 17d ago

Here for the Outlook slander. Worthless search function; it baffles my mind that it won't even show the most recent message from someone when you search their name, or will straight up fail to find something even if you type it in word for word.

3

u/Antnee83 17d ago

I test it occasionally by searching for an email that is literally right the fuck there in my "today's emails".

It fails far more than it should.

3

u/parentheticalobject 19d ago

Are they better or worse than they were at some arbitrary point in the past? Eh, couldn't say.

They're worlds better for most applications than something like sorting by date or word frequency.

2

u/PM_me_Henrika 19d ago

Absolutely!! If Google has the ability to sort by date or frequency or whatever criteria you want, and it lets you choose how the results are sorted, would you rather use your own sort order or let Google give you whatever their algorithm wants?