Sentrya logo Sentrya Get rid of spam

Sniper Dz Phishing as a Service

Added on: 01/10/2024

Cybercriminals are increasingly using a service called Sniper Dz to launch phishing attacks, which trick people into giving away personal information such as usernames and passwords. Sniper Dz offers free phishing tools (Phishing as a Service, or PhaaS), making it easy for anyone - even those without strong technical skills - to set up fake websites that look like real ones, such as Facebook, PayPal, and Netflix.

Phishing is a type of cyberattack where criminals create fake login pages to steal personal information when people try to log in. Once attackers collect the details, they can use them to hack into real accounts and commit further crimes. Sniper Dz operates mainly through Telegram channels, offering easy access to phishing templates that users can customise and deploy in their attacks.

One reason this platform is so concerning is that it allows almost anyone to become a cybercriminal. Since the phishing kits are pre-built, attackers don’t need to know how to code or build their own websites. They simply copy the provided templates and trick unsuspecting people into handing over sensitive information. This has led to over 140,000 attacks, affecting users all around the world.

The creators of Sniper Dz not only give away free phishing kits but also sell premium versions for more advanced attacks. They regularly update their phishing templates to stay ahead of security measures, making it harder for companies to protect their users.

The growing use of these tools highlights the importance of staying vigilant online. People should always double-check website URLs before entering login information, especially if they arrive at a site through a suspicious link. Companies can also help by educating their users and using stronger security measures like two-factor authentication to make it harder for hackers to gain access to accounts.
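To make the "double-check the URL" advice concrete, here is a minimal Python sketch of the kind of check a cautious user (or a browser extension) could apply before entering credentials. The allowlist of trusted domains and the function name are purely illustrative, not part of Sniper Dz, Sentrya, or any real product; the point is that a trusted brand name appearing *somewhere* in an address is not enough - the hostname itself must end in the trusted domain.

```python
from urllib.parse import urlparse

# Illustrative allowlist of domains the user actually trusts.
TRUSTED = {"facebook.com", "paypal.com", "netflix.com"}

def looks_legitimate(url: str) -> bool:
    """Return True only if the URL's hostname is a trusted domain
    or a genuine subdomain of one (e.g. www.paypal.com)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(looks_legitimate("https://www.paypal.com/signin"))        # True
# The real domain used as a *subdomain* of an attacker's site:
print(looks_legitimate("https://paypal.com.login-verify.xyz"))  # False
```

The second example shows the classic phishing trick this article warns about: "paypal.com" is visible at the start of the address, but the registrable domain is actually login-verify.xyz.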


LG Smart TVs Now Use Emotionally Intelligent Ads with Zenapse AI Technology

In a bold move shaping the future of connected TV advertising, LG Electronics has partnered with artificial intelligence company Zenapse to introduce emotionally intelligent advertising to its smart TVs. This AI-driven innovation uses advanced emotional analytics to deliver personalised ads based on viewers’ psychological and emotional profiles.

What Is Emotionally Intelligent Advertising?

Emotionally intelligent advertising is the next evolution in personalised marketing. Rather than just targeting users based on demographics, browsing behaviour, or viewing history, this method leverages emotion-based data to tailor content more precisely.

At the centre of this technology is Zenapse’s Large Emotion Model (LEM), a proprietary AI system that maps out psychological patterns and emotional states across various audiences. When integrated into LG’s Smart TV platform, this model works in tandem with the TVs’ first-party viewership data to identify how users feel while watching content, and delivers ads that resonate on a deeper level.

How LG’s Smart TV AI Works with Zenapse

LG’s smart TVs already employ Automatic Content Recognition (ACR), a tool that gathers data about the content viewers consume, including shows and apps accessed through external devices. This gives LG valuable insight into a household’s viewing preferences.

By combining ACR data with Zenapse’s emotion-detection AI, advertisers can now deliver highly relevant, emotionally tuned ad experiences that reflect the viewer’s mindset. For example:
• A user showing patterns of stress may see wellness or mindfulness ads.
• A family engaging in uplifting content might receive vacation or family-focused brand messages.

This goes far beyond traditional contextual advertising; it is what experts are calling emotionally aware targeting.

Data Privacy and Ethical Considerations

As with all AI-powered personalisation, privacy is a major concern. LG’s smart TVs collect data through ACR, and while users can opt out, this type of emotionally aware targeting requires even more granular behavioural data.

Consumer advocacy groups warn that technologies which infer mental or emotional states could cross ethical boundaries if not regulated properly. Transparency, consent, and data control will be key for LG and Zenapse to maintain user trust.

LG has stated that all data used is anonymised and consent-based, but the introduction of emotion-based ads will likely renew calls for updated privacy legislation in the smart home and streaming ecosystem.

What’s Next for Smart TV Advertising?

This partnership signals a major shift in how ads are delivered on smart TVs. With emotionally intelligent AI models now in play, we can expect:
• More platforms to adopt emotion-based personalisation
• Expanded use of machine learning for real-time emotional detection
• Regulatory scrutiny over AI and mental-state inference

For now, LG and Zenapse are pioneering a new frontier in AI-driven, emotion-aware media experiences, one that could redefine the relationship between brands and consumers in the living room.

How Data Brokers and AI Shape Digital Privacy: The Role of Publicis and CoreAI

In the digital age, vast amounts of personal data are collected, analysed, and sold by data brokers: companies that specialise in aggregating consumer information. These entities compile data from various sources, creating highly detailed profiles that are then sold to advertisers, businesses, and even political organisations.

One of the key players in this evolving landscape is Publicis Groupe, a global advertising and marketing leader, which has developed CoreAI, an advanced artificial intelligence system designed to optimise data-driven marketing strategies. This article explores how data brokers operate, the privacy concerns they raise, and how AI-powered marketing technologies like CoreAI are transforming digital advertising.

What Are Data Brokers?

How They Operate

Data brokers collect and process personal data from a variety of sources, including:
• Public Records: Government databases, voter registration files, and real estate transactions.
• Online Behaviour: Website visits, search history, and social media activity.
• Retail Purchases: Credit card transactions and loyalty program memberships.
• Mobile Data: Location tracking from smartphone apps.

This information is aggregated into comprehensive consumer profiles that categorise individuals based on demographics, behaviour, interests, and financial status. These profiles are then sold to companies for targeted advertising, risk assessment, and even hiring decisions.

Privacy Concerns

The mass collection and sale of personal data raise significant privacy issues, including:
• Lack of Transparency: Most consumers are unaware that their data is being collected and sold.
• Potential for Misuse: Personal information can be exploited for identity theft, scams, or discriminatory practices.
• Limited Regulation: Many countries lack strict laws governing the data brokerage industry, allowing companies to operate with minimal oversight.

In response to these concerns, regulatory bodies such as the Consumer Financial Protection Bureau (CFPB) are considering restrictions on data brokers, including banning the sale of Social Security numbers without explicit consent.

Publicis Groupe: A Major Player in AI-Driven Marketing

What is Publicis?

Publicis Groupe is a global marketing and communications firm offering advertising, media planning, public relations, and consulting services. The company operates in over 100 countries and works with major brands across industries, leveraging advanced data analytics to enhance marketing campaigns.

Introduction of CoreAI

To further solidify its position as a leader in AI-driven marketing, Publicis introduced CoreAI in January 2024. CoreAI is an intelligent system designed to analyse and optimise vast datasets, including:
• 2.3 billion consumer profiles
• Trillions of data points on consumer behaviour

This AI-powered tool integrates machine learning and predictive analytics to help businesses make data-driven marketing decisions, improve targeting accuracy, and enhance customer engagement.

How CoreAI Uses Data

CoreAI uses AI-driven insights to:
• Enhance media planning: Optimising ad placements and improving ROI.
• Personalise advertising: Delivering hyper-targeted ads based on individual behaviour.
• Improve operational efficiency: Automating marketing tasks, reducing costs, and streamlining campaigns.

Publicis has committed €300 million over the next three years to further develop its AI capabilities, reinforcing its goal of leading the AI-driven transformation of digital marketing.

The Intersection of Data Brokers and AI in Advertising

The combination of data brokers and AI-powered marketing platforms like CoreAI is reshaping how businesses interact with consumers. By leveraging massive datasets and machine learning, companies can:
• Predict consumer behaviour with greater accuracy.
• Refine targeted advertising to reach the right audience at the right time.
• Enhance customer experiences through personalised content.

However, this technological evolution also raises ethical and privacy concerns regarding consumer data rights, AI bias, and the potential misuse of personal information.

How Consumers Can Protect Their Data

Individuals concerned about data privacy can take several steps to protect their information:
1. Opt out of data collection: Many data brokers offer opt-out options, though the process can be tedious.
2. Use privacy-focused services: Platforms like Sentrya help remove personal data from public databases.
3. Limit data sharing: Adjust privacy settings on social media, browsers, and mobile apps.
4. Stay informed: Keep track of legislation and regulations surrounding data privacy.

The growing influence of data brokers and AI-driven marketing technologies is transforming the digital landscape. Companies like Publicis Groupe are pioneering AI solutions like CoreAI, offering advanced data-driven insights while raising concerns about consumer privacy. As regulations evolve, businesses and consumers alike must navigate the fine line between innovation and ethical data use.

Amazon Will Save All Your Conversations with Echo

Starting 28 March 2025, Amazon will discontinue the “Do Not Send Voice Recordings” feature on select Echo devices, meaning all voice interactions will be processed in the cloud. This change aligns with the introduction of Alexa Plus, Amazon’s enhanced voice assistant powered by generative AI.

Background on the “Do Not Send Voice Recordings” Feature

Previously, Amazon offered a feature allowing certain Echo devices to process voice commands locally, without sending recordings to the cloud. This feature was limited to specific models, namely the Echo Dot (4th Gen), Echo Show 10, and Echo Show 15, and was available only to U.S. users with devices set to English. Its primary purpose was to give users greater control over their privacy by keeping voice data confined to the device.

Transition to Cloud Processing

In an email to affected users, Amazon explained that the shift to cloud-only processing is necessary to support the advanced capabilities of Alexa Plus, which leverages generative AI technologies requiring substantial computational resources. The email stated:

“As we continue to expand Alexa’s capabilities with generative AI features that rely on the processing power of Amazon’s secure cloud, we have decided to no longer support this feature.”

Consequently, all voice interactions with Alexa will be transmitted to Amazon’s cloud servers for processing, enabling more sophisticated and personalised responses.

Privacy Controls and User Options

Despite this change, Amazon emphasises its commitment to user privacy. Users will retain the ability to manage their voice recordings through the following options:
• Automatic Deletion: Users can configure settings to ensure that voice recordings are not saved after processing.
• Manual Deletion: Users can review and delete specific voice recordings via the Alexa app or the Alexa Privacy Hub.

These measures allow users to maintain a degree of control over their data, even as cloud processing becomes standard.

Implications for Users

The move to mandatory cloud processing reflects Amazon’s strategy to enhance Alexa’s functionality through advanced AI capabilities. While this transition promises more dynamic interactions, it also raises concerns about data privacy and security. Users are encouraged to familiarise themselves with Alexa’s privacy settings and tailor their experience to their comfort level.

As Amazon phases out local voice processing in favour of cloud-based AI enhancements, users must balance embracing new technology against managing their privacy preferences. Staying informed about these changes and proactively adjusting privacy settings will be crucial in this evolving landscape.

Italy Data Protection Authority Blocks Chinese AI App DeepSeek Over Privacy Concerns

Italy’s Data Protection Authority, known as the Garante, has taken decisive action against the Chinese artificial intelligence application DeepSeek, citing significant concerns over user data privacy. The regulator has ordered an immediate block on the app’s operations within Italy and opened a comprehensive investigation into its data handling practices.

Background on DeepSeek

Developed by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, DeepSeek is an AI-powered chatbot that has rapidly gained global popularity. Notably, it has surpassed U.S. competitor ChatGPT in downloads from Apple’s App Store, attracting attention from both users and regulatory bodies.

Regulatory Actions and Concerns

The Garante’s intervention was prompted by DeepSeek’s failure to provide adequate information regarding its data collection and processing methods. Specifically, the authority sought clarity on:
• The types of personal data collected
• The sources of this data
• The purposes and legal basis for data processing
• Whether user data is stored in China

DeepSeek’s responses were deemed “completely insufficient”, leading to the immediate suspension of the app’s data processing activities concerning Italian users. The Garante emphasised the potential risk to the data of millions of individuals in Italy as a primary concern driving this decision.

International Scrutiny

Italy is not alone in its apprehensions about DeepSeek’s data practices. Data protection authorities in France, Ireland, and South Korea have also opened inquiries into the app’s handling of personal information. These investigations reflect growing global vigilance over the privacy implications of rapidly advancing AI technologies.

Company’s Position and Market Impact

DeepSeek has asserted that it does not operate within Italy and is therefore not subject to European legislation. However, the Garante proceeded with its investigation due to the app’s significant global download numbers and potential impact on Italian users. The emergence of DeepSeek’s new chatbot has intensified competition in the AI industry, challenging established American AI leaders with its lower costs and innovative approach.

The actions taken by Italy’s Data Protection Authority underscore the critical importance of transparency and compliance in the handling of personal data by AI applications. As AI technologies continue to evolve and proliferate, regulators worldwide are increasingly vigilant in ensuring that user privacy is safeguarded. The ongoing investigations into DeepSeek will serve as a significant benchmark for the enforcement of data protection standards in the AI industry.