
The Internet

Started by Jacob, November 29, 2023, 01:17:24 PM

Jacob

Quote
Bad Bots Account for 73% of Internet Traffic: Analysis
The top five categories of Bad Bot attacks are fake account creation, account takeovers, scraping, account management, and in-product abuse.

Arkose Labs has analyzed and reported on tens of billions of bot attacks from January through September 2023, collected via the Arkose Labs Global Intelligence Network.

Bots are automated processes that act across the internet. Some serve useful purposes, such as indexing the web; but the majority are Bad Bots designed for malicious ends. Bad Bots are increasing dramatically: Arkose estimates that 73% of all internet traffic (as of Q3 2023) comprises Bad Bots and related fraud farm traffic.

The top five categories of Bad Bot attacks are fake account creation, account takeovers, scraping, account management, and in-product abuse. These haven't changed from Q2, other than in-product abuse replacing card testing. The biggest increases in attacks from Q2 to Q3 are SMS toll fraud (up 2,141%), account management (up 160%), and fake account creation (up 23%).

The top five targeted industries are technology (where Bad Bots comprise 76% of internet traffic), gaming (29%), social media (46%), e-commerce (65%), and financial services (45%). If a bot fails in its purpose, there is a growing tendency for criminals to switch to human-operated fraud farms. Arkose estimates there were more than 3 billion fraud farm attacks in H1 2023. These fraud farms appear to be located primarily in Brazil, India, Russia, Vietnam, and the Philippines.

The growth in the prevalence of Bad Bots is likely to increase for two reasons: the arrival and general availability of artificial intelligence (primarily gen-AI), and the increasing business professionalism of the criminal underworld with new crime-as-a-service (CaaS) offerings.

From Q1 to Q2, intelligent bot traffic nearly quadrupled. "Intelligent [bots] employ sophisticated techniques like machine learning and AI to mimic human behavior and evade detection," notes the report (PDF). "This makes them skilled at adaptation as they target vulnerabilities in IoT devices, cloud services, and other emerging technologies." They are widely used, for example, to circumvent 2FA defense against phishing.

Separately, the rise of artificial intelligence may or may not relate to a dramatic rise in 'scraping' bots that gather data and images from websites. From Q1 to Q2, scraping increased by 432%. Scraping social media accounts can gather the type of personal data that can be used by gen-AI to mass produce compelling phishing attacks. Other bots could then be used to deliver account takeover emails, romance scams, and so on. Scraping also targets the travel and hospitality sectors.

Scraping, it must be said, is a legally murky area. It is not specifically illegal; but if it defies a website's published terms of use, it is certainly immoral. There are services that openly offer web scraping facilities, which demonstrates the relationship between CaaS, AI, and bots (here primarily scraping bots).

"This is a website you can use to make sure your bots aren't getting prevented by a website," Kevin Gosschalk, founder and CEO of Arkose Labs, told SecurityWeek, referring to a specific provider that he would not name. "You can purchase this software. It has enterprise support and so on. But it is purpose built to commit crime. That is what it does. And there are many other different websites like this, but they look like legitimate businesses. It is a good example of a product purpose built to commit fraud."

It is also a good example of crime-as-a-service. Crime-as-a-service enables wannabe criminals who may have the intent but not the skills to engage in cybercrime. "The massive rise of CaaS has completely changed the economics for adversaries," continued Gosschalk. "It's much cheaper to attack companies and the attacks are just better because it's a dev shop that is doing the attacks instead of just individual cybercriminals."

The continuing increase in the volume of Bad Bots suggests they remain profitable for the criminals. The arrival of gen-AI will improve the performance of Bad Bots, while the growth of CaaS will increase the number of Bad Bot operators; so, it will get worse. The only solution is Bad Bot detection and mitigation to limit the access of the bots to their human or system targets. If it is not profitable, they won't do it.

https://www.securityweek.com/bad-bots-account-for-73-of-internet-traffic-analysis/

Given the cost of executing internet attacks (low), the potential rewards (significant), and the risk of adverse consequences (low) it seems likely that internet scams and abuse are going to continue to rise.

I wonder if there's potential for garbage internet traffic to 1) significantly alter how we interact with the internet, and/or 2) put enough pressure on internet resources that the internet's efficacy is curtailed.

Sheilbh

Also relevant to that - great thread on an SEO heist:
https://twitter.com/jakezward/status/1728032634037567509

And within this, as well as the bad clicks, there's the long tail of the internet, which is basically bad content. The huge trend is towards made-for-advertising content, which is incredibly dodgy - it's those weird links you see at the bottom of pages. That's increasingly where ad spending is going too (except for the walled gardens of the platforms).

We've designed the internet so that the open web is basically most effective for people with no interest or incentive in caring about quality or accuracy. It's grim :(

Edit: Was going to post in the AI thread but I think it's more suitable here.
Let's bomb Russia!

Jacob

Yeah.

You also posted this in the AI thread:
Quote from: Sheilbh on November 29, 2023, 01:30:51 PM
Again very specific to journalism - but incredible story:
https://futurism.com/sports-illustrated-ai-generated-writers

AI journalists writing AI content, which is garbage, but includes topics such as personal finance ("your financial status translates to your value in society") with AI bylines and bios for their "journalists".

As the article ends:
Quote
We caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism; in the ensuing storm of criticism, CNET issued corrections to more than half its AI-generated articles. G/O Media also published AI-generated material on its portfolio of sites, resulting in embarrassing bungles at Gizmodo and The A.V. Club. We caught BuzzFeed publishing slapdash AI-generated travel guides. And USA Today and other Gannett newspapers were busted publishing hilariously garbled AI-generated sports roundups that one of the company's own sports journalists described as "embarrassing," saying they "shouldn't ever" have been published.

If any media organization finds a way to engage with generative AI in a way that isn't either woefully ill-advised or actively unethical, we're all ears. In the meantime, forgive us if we don't hold our breath.

... but I think it's potentially relevant for the future internet as a whole.

With the enshittification of places like google and the closed platforms, with the proliferation of scam bait and attacks, and the proliferation of useless garbage content drowning out genuine and useful content - what is the future of the internet?

If there's a ratio of shit you have to wade through to get useful content, presumably it's possible for that ratio to get so bad that people disengage because it's not worth it. It certainly seems to me that that ratio is getting worse and unlikely to get better.

The Brain

People touching grass may not be all bad.
Women want me. Men want to be with me.

Sheilbh

Yeah - also relevant is this challenge to Meta's proposal in Europe to shift to a "consent" to personalised advertising (and basically tracking everywhere) or pay:
https://noyb.eu/en/noyb-files-gdpr-complaint-against-meta-over-pay-or-okay

Which I think makes the very good point that we're at risk of developing an internet experience that is based on your willingness and ability to pay. Especially when you consider the enshittification of Google or Amazon (which astounds me as a product in terms of how much worse it's got).

You can even imagine what it would look like - Apple devices which are better for privacy, Google and Amazon with relevant results and the "promoted" guff knocked out, smooth Meta products without the tracking and personalised advertising (although - weirdly - everyone LOVES Instagram advertising, I assume because it's 99% lifestyle not raging into the void) and other decent quality apps (policed by Apple and Google's app store rules). Add into that the increasing drive of media companies to go behind paywalls - and in Germany or France you often get a similar consent-or-pay model.

But it's shit for the open web (and, I'd argue, our societies).
Let's bomb Russia!

Iormlund

I hope they do that. Not sure that many people here would pay for Facebook ...

Josquius

Given Facebook are tracking you even if you don't have an account there, I can't help but get vibes that this is a bit of a shakedown.
Pay up or your data gets it.


Also, not mentioned here but something I increasingly hear about at industry conferences and worth considering: the environmental impact. The broader societal concerns about AI dominate the conversation, but energy use is a factor that crops up a lot too.
3/4 of traffic is trash... that's a lot of energy wasted.
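
A rough back-of-envelope sketch of what that could mean, in Python. Only the 73% bot share comes from the Arkose report quoted above; the network electricity figure is purely illustrative and assumed, not a sourced number:

```python
# Back-of-envelope estimate of energy attributable to bad-bot traffic.
# ASSUMPTION: network_twh below is a hypothetical placeholder, not real data.
network_twh = 300.0  # assumed annual electricity use of data networks, in TWh
bot_share = 0.73     # bad bots + fraud farms as a share of traffic (Arkose, Q3 2023)

wasted_twh = network_twh * bot_share
print(f"Traffic attributable to bad bots: {wasted_twh:.0f} TWh/year")
# prints: Traffic attributable to bad bots: 219 TWh/year
```

Of course network energy use doesn't scale linearly with traffic volume, so this is an intuition pump rather than a real estimate - but even a fraction of that would be a striking amount of waste.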

On the bright side there is a rise in the concept of ethically trained AI. Adobe is really leaning into this niche to distinguish itself from the scandals of the upstarts. Scraping in itself isn't illegal, but scraping copyrighted material is another matter... lots happening there.