No one likes passwords. They take ages to enter, are hard to remember, and the need for a number, symbol, uppercase letter, and a couple of hen’s teeth only makes creating them all the more difficult. But if you use the same password everywhere, or limit yourself to simple short (read — weak) passwords, sooner or later you’ll get hacked. How to combine ease of input, memorability, and hack resistance? An interesting, if unusual, way is to use emojis — yes, those same smileys 😁 and other cute icons 🔐 we love to use in chats and posts.

On today’s computers and smartphones, emojis are just as much full-fledged symbols as letters in alphabets and punctuation marks. That’s because they’re part of the Unicode standard (see here for a full list of standardized emojis), so in theory, they can be used in any text — including in passwords.

Why use emojis in passwords

Since there are a great many emojis in existence, your password can be twice as short.

When intruders try to brute-force a password containing letters, numbers, and punctuation marks, there are fewer than a hundred variations for each symbol they need to pick. But there are more than 3600 standardized emojis in Unicode, so adding one to your password forces hackers to go through around 3700 variants per symbol. So, in terms of complexity, a password made up of five different emojis is equivalent to a regular password of nine characters, while seven emojis are equivalent to a strong password of 13 “regular” characters.
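You can check this arithmetic with a few lines of Python. The alphabet sizes below (about 100 typeable characters, about 3600 emojis) are the rough figures from this article, not exact counts:

```python
import math

REGULAR_ALPHABET = 100   # letters, digits, punctuation: rough estimate
EMOJI_ALPHABET = 3700    # ~100 regular characters + ~3600 Unicode emojis

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Brute-force complexity of a password, in bits."""
    return length * math.log2(alphabet_size)

# Five emojis vs. nine regular characters:
print(round(entropy_bits(EMOJI_ALPHABET, 5)))    # ~59 bits
print(round(entropy_bits(REGULAR_ALPHABET, 9)))  # ~60 bits

# Seven emojis vs. thirteen regular characters:
print(round(entropy_bits(EMOJI_ALPHABET, 7)))     # ~83 bits
print(round(entropy_bits(REGULAR_ALPHABET, 13)))  # ~86 bits
```

The comparison holds for any figures in this ballpark: each emoji contributes almost twice as many bits as a regular character.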

Some new emojis in Unicode

Emojis are easier to memorize. Instead of a meaningless jumble of letters and numbers, you can compose a logical sentence and create an emoji puzzle based on it. For this you can use an emoji translator or a chatbot like ChatGPT.

An emoji translator or ChatGPT can create an emoji-based puzzle-password on a given topic

Hackers don’t brute-force emojis. Various hacking tools and dictionaries for cracking passwords include combinations of words, numbers, and common substitutions like E1iteP4$$w0rd, but not (yet?) emojis. So when an attacker goes through a leaked password database, your account protected with a 👁️🐝🍁👁️🥫🪰 (“I believe I can fly”) password is very likely safe.

All this sounds too good to be true. So what are the downsides of emoji passwords? Alas, they’re sizeable.

Why not use emojis in passwords?

Not all services accept emoji passwords.

We carried out a little account-creation experiment using a password consisting of several standard emojis. It was rejected by both Microsoft/Outlook and Google/Gmail. However, Dropbox and OpenAI happily accepted it, so basically it’s a matter of experimentation.

Not every service will accept an emoji password

You’ll have to test your emoji password immediately to make sure it works. Even if you’re able to create an account with it, it may not pass verification when signing in.

Emojis are harder to enter. On smartphones, entering emojis is simplicity itself. On desktop computers, however, it can be a bit more troublesome — though not excessively so (see below for details). In any case, you’ll have to find the emojis you need in a long list, making sure to select the right picture from several similar ones. If you work across platforms, remember to check that you can enter these emojis on both your computer and smartphone for all the services you use.

Recent emojis give you away. Many smartphone keyboards display frequently used emojis at the top of the list. This information is unlikely to help online hackers, but friends or family may be able to guess or snoop on your password.

Recent emoji can reveal a lot about you to prying eyes

How to create a password with emojis

A reasonable compromise would be to add an emoji or two to your password to up its complexity. The rest of the password can then be alphanumeric, and less fancy. Of course, using emojis is no substitute for traditional security tips: using long passwords, a password manager and two-factor authentication (2FA). Speaking of which, our password manager can both store passwords with emojis and generate 2FA codes.
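As a sketch of that compromise, here’s how a hybrid password might be generated with Python’s secrets module. The emoji pool is just an illustrative handful, not a recommendation:

```python
import secrets
import string

# A small illustrative pool; any standardized Unicode emojis would do.
EMOJIS = ["😀", "🔐", "🐝", "🍁", "🚗", "🌋", "🎩", "🦊"]

def hybrid_password(alnum_len: int = 10, emoji_count: int = 2) -> str:
    """Random alphanumeric core plus a couple of emojis for extra complexity."""
    core = "".join(secrets.choice(string.ascii_letters + string.digits)
                   for _ in range(alnum_len))
    extra = "".join(secrets.choice(EMOJIS) for _ in range(emoji_count))
    return core + extra

print(hybrid_password())  # e.g. 'k3Vq9ZtRfa🔐🐝'
```

Using `secrets` rather than `random` matters here: it draws from the OS cryptographic random source, which is what you want for passwords.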

Emoji password and 2FA code in Kaspersky Password Manager

How to enter emoji passwords

The input method depends on your device and operating system. Smartphones have a special keyboard section for this, while on computers you can use one of these options:

  • In Windows 10 or 11, press the Win key + period simultaneously to open the emoji table in any input field. In many layouts, the key combination Win + ; also works.
  • In macOS, the emoji table is available in any application under Edit → Emoji & Symbols. To open the table from the keyboard, hold down Command + Control + Spacebar together.
  • In Ubuntu Linux (version 18 and higher), you can enter emojis by right-clicking in the input field and selecting Insert Emoji from the context menu. To call up the table from the keyboard, just like in Windows, press Win + period at the same time.
  • Input by character code. Slow and boring as it may be, this is a reliable way to input any Unicode character — not just emojis. First, look up the code of the character in question in the table, then enter it using a special key combination. In Windows, press and hold Alt, then enter the decimal code on the numeric keypad. For other OSes the process is described in more detail here.
  • But the easiest way to enter emoji passwords is to save them in Kaspersky Password Manager and insert them into the required input fields automatically.
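Because emojis are ordinary Unicode characters, you can verify that a symbol matches its code from the table programmatically. A quick Python illustration (code points taken from the Unicode charts):

```python
# Emojis are regular Unicode characters, addressable by code point or name.
lock = chr(0x1F510)                            # U+1F510 CLOSED LOCK WITH KEY
grin = "\N{GRINNING FACE WITH SMILING EYES}"   # U+1F601, the 😁 above

print(lock, hex(ord(lock)))   # 🔐 0x1f510
print(grin, hex(ord(grin)))   # 😁 0x1f601

# Beware: some emojis are sequences of several code points (skin tones,
# flags, ZWJ combinations), so their "length" can be more than one.
family = "👨\u200d👩\u200d👧"   # man + ZWJ + woman + ZWJ + girl
print(len(family))              # 5 code points, displayed as a single glyph
```

This multi-code-point behavior is one reason a password accepted on one platform may fail verification on another: the service has to compare the same byte sequence both times.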

#emojis #passwords

Guess which of your possessions is the most active at collecting your personal information for analysis and resale?

Your car. According to experts at the Mozilla Foundation, neither smart watches, smart speakers, surveillance cameras, nor any other gadgets analyzed by the Privacy Not Included project come close to the data collection volumes of modern automobiles. This project involves experts examining user agreements and privacy policies to understand how devices use owners’ personal data.

For the first time in the project’s history, absolutely all (25 out of 25) reviewed car brands received a “red card” for unacceptably extensive collection of personal information, lack of transparency in its use, and poorly documented data transmission and storage practices (for example, it’s not known whether encryption is used). Even worse, 19 out of 25 brands officially state that they can resell the information they collect. The icing on the cake of such privacy violations is that car owners have almost no ability to opt out of data collection and transmission: only two brands, Renault and Dacia, offer owners the right to delete collected personal data; however, it’s not so easy to even figure out how to exercise this right.

Buried deep within the license agreements that car buyers usually accept without even reading, there are utterly outrageous violations of privacy rights. For example, the owner’s consent to share their sexual preferences and genetic information (Nissan), disclosure of information upon informal requests from law enforcement agencies (Hyundai), and collection of data on stress levels — all in addition to 160 other data categories with deliberately vague names such as “demographic information”, “images”, “payment information”, “geolocation”, and so on.

The worst brand of all in the ratings was Tesla, which earned, in addition to all the other possible penalty points, a special label: “Untrustworthy AI”.

How cars collect information

Modern cars are literally crammed with sensors — ranging from engine and chassis sensors that measure things like engine temperature, steering wheel angle, or tire pressure, to more interesting ones such as perimeter and interior cameras, microphones, and hand presence sensors on the steering wheel.

All of them are connected to a single bus, so the car’s main computer centrally receives all this information. In addition, all modern cars are equipped with GPS and cellular communication, Bluetooth, and Wi-Fi modules. In many countries, cellular connectivity and GPS are required by law (to automatically call for help after an accident), but manufacturers happily use this functionality for the convenience of both the driver and themselves. You can plan routes on the car’s screen, remotely diagnose malfunctions, start the car in advance… And of course, the “sensors and cameras → car computer → cellular network” bridge creates a constant channel for information collection: where you’re going, where and for how long you park, how sharply you turn the steering wheel and accelerate, whether you use seat belts, and so on.

More information is collected from the driver’s smartphone when it’s connected to the car’s onboard system to make calls, listen to music, navigate, and so on. And if the smartphone is equipped with a mobile app from the car manufacturer for controlling car functions, data can be collected even when the driver is not in the car.

In turn, information about passengers can be collected through cameras, microphones, Wi-Fi hotspots, and Bluetooth functions. With these, it’s easy to find out who regularly travels in the car with the driver, when and where they get in and out, what smartphone they use, and so on.

Why do car manufacturers need this information?

To earn more money. Apart from analysis for “improving the quality of products and services”, the data can be resold, and car features can be adapted for greater profit for the manufacturer.

For example, insurance companies buy information about a particular driver’s driving style to more accurately predict the likelihood of accidents and adjust insurance costs. As early as 2020, 62% of cars were equipped with this controversial function right at the factory, and this figure is expected to rise to 91% by 2025.

Marketing companies are also eager to use such data to target advertising based on the owner’s income, marital status, and social status.

But even without reselling personal data, there are many other unpleasant monetization scenarios, such as enabling or disabling additional car functions through subscriptions, as BMW tried unsuccessfully to do with heated seats, or selling expensive cars on credit with forced vehicle lockdown in case of payment default.

What else is wrong with data collection and telematics?

Even if you think “there’s nothing wrong with ads” and “there’s nothing interesting they could learn about me”, consider the additional risks you and your car are exposed to due to the technologies described above.

Data leaks. Manufacturers actively collect your information and store it permanently — without sufficient protection. Just recently, Toyota admitted to leaking 10 years of data — all collected from millions of cloud-enabled vehicles. Audi had information on 3.3 million customers leaked. Other car manufacturers have also been victims of data breaches and cyberattacks. If this much personal data falls into the hands of real criminals and fraudsters, not just marketers, it could spell disaster.

Theft. Back in 2014, we explored the possibility of stealing a vehicle via cloud functions. Since 2015, it has become clear that criminals remotely taking over a car is not some futuristic fantasy, but a harsh reality. Car thefts in recent years often exploit the remote relaying of signals from a legitimate key fob, but last year’s epidemic of KIA and Hyundai “TikTok hijackings” was based on the car’s smart functions and only required the thief to insert a USB drive.

Surveillance of relatives. When the car does not belong to you, but to a relative or employer, the owner can track the car’s location, set geographical limits for its use, set speed limits and permitted driving times, and even control the volume of the audio system! Many car brands, such as Volkswagen and BMW, offer such features. As we know from our stalkerware research and the recent AirTag tracking scandals, such capabilities are simply crying out to be abused.

How to reduce risks?

Due to the scale of the problem, there are no simple solutions. Therefore, here are some mitigation options in descending order of radicality:

  1. Walk or ride a bicycle.
  2. Buy an old car model. Almost all cars manufactured before 2012 have very limited data collection and transmission capabilities.
  3. Buy a car with a minimal set of “smart” sensors and/or no communication module. Some manufacturers offer basic configurations with limited capabilities, but this requires carefully reading the user manual. The absence of a dedicated communication module (GSM/3G/4G) in the car is a reliable sign of its limited capabilities. Note that more and more cars come with smart features even in basic configurations (this path has already been paved by Smart TVs — they make money by collecting and selling data).
  4. Don’t install the car’s mobile app on your phone. Of course, starting the car from your smartphone or warming it up before you get in is often convenient, but is it necessary to pay for these features with deeply personal information — in addition to the money you spend? Very debatable.
  5. Don’t activate Apple’s CarPlay or Android Auto pairing functions. When these functions are activated, the smartphone OS manufacturer gets all kinds of information from the car, and the car, in turn, retrieves information from the phone.
  6. Don’t connect the car to your phone over Bluetooth or Wi-Fi. This way, again, you lose some functionality, but at least the car won’t send information to the manufacturer through the phone, and nor will it download the phone’s address book and other personal data. You can compromise by establishing a Bluetooth connection only for “headset” and “headphones” protocols: you’ll be able to play music from your phone through the car speakers, but the transmission of other data types (such as the address book) won’t be available.
  7. A bonus tip, which doesn’t exclude the previous ones: Mozilla suggests signing a collective petition to car manufacturers, urging them to change their business model and stop making money by spying on customers. Power to the petitioning people!

#Spies #wheels #carmakers #collect #resell #information

Today, some form of virtualization or containerization can be found in almost all large IT solutions. Containers provide a host of benefits during system development, installation, maintenance, and use. They promote faster development, cost savings, and conservation of other resources. At the same time, many security solutions that work on physical and virtual servers are not directly applicable to containers. What risks should companies consider when implementing containerization, and what measures are needed to protect container infrastructure?

Benefits of containerization in development and operation

A container is an isolated environment for running a single application, created by OS kernel-level tools. The container image includes both the application and its required settings and auxiliary components, making it very convenient for developers to pack everything they need into the container. Those using such a container find it much easier to operate than old-fashioned infrastructure. What’s more, isolation greatly reduces the influence of containerized applications on each other. In a container infrastructure, therefore, there are fewer causes for failures, while at the same time there’s more controllability for administrators.

Containerization is a lighter technology than virtualization: containers don’t emulate hardware, and there’s no need to supply the entire contents of the virtual machine — in particular the guest OS. In many cases, containerized workloads are easier to scale.

Without a doubt, the most common tool for creating and storing container images is Docker, while container workload orchestration is most often implemented with Kubernetes, Docker Swarm, or Red Hat OpenShift.

Containerization has become a key part of modern IT development approaches. Many applications are developed in a microservice architecture: individual features of a large application are allocated to microservices that communicate with other parts of the application through APIs. An example is a video player within a social network or an online store’s payment process. These microservices are often delivered as containers, allowing developers to have their own development and delivery cycle.

Containers dovetail perfectly with the modern CI/CD (continuous integration/continuous delivery) methodology, so application updates get released more quickly and with fewer bugs. This approach envisages a short development cycle, teams working in parallel on the same code, and automation of routine actions. Containerization also improves the efficiency of the CI/CD pipeline: the CI/CD system uses container images as templates and delivers the build as a ready-to-deploy image. The key point is that updates are delivered in the form of new images — rather than deployed inside an existing, running container. This speeds up the preparation and debugging of the release, lessens the infrastructure requirements for both the developer and the customer, improves operational stability, and makes the application easier to scale.

By properly integrating container security requirements into development and build processes, a company takes a big stride toward full implementation of DevSecOps.

Core threats in container infrastructure

The host system, containerization environments, and containerized applications are all susceptible to most of the typical information security risks, such as vulnerabilities in components, insecure settings and the like.

Malicious actors are already actively exploiting all of the above. For example, 1650 container images with malware were found in the public Docker Hub repository. In a similar case, malicious images went undetected for around a year. There are known malicious campaigns that use the Docker API to create malicious containers on targeted systems, disable monitoring systems, and engage in cryptocurrency mining. In another attack, threat actors went after Kubernetes clusters with misconfigured PostgreSQL. Another common problem is that outdated container images harboring known vulnerabilities like Log4Shell can sit in repositories for quite some time. Also, developers regularly leave API keys and other secrets behind in containers.
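On the last point: even a naive pattern scan over a Dockerfile catches the most commonly leaked secrets. Real scanners ship far richer rule sets; the patterns and the sample Dockerfile below are simplified illustrations, not production rules:

```python
import re

# Simplified patterns; production secret scanners use hundreds of rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"]?\w{16,}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in a config/Dockerfile."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(text)]

dockerfile = """
FROM python:3.11-slim
ENV API_KEY=supersecretvalue12345
RUN aws configure set aws_access_key_id AKIAABCDEFGHIJKLMNOP
"""
print(scan_for_secrets(dockerfile))
```

A container security system runs checks of this kind over every layer of every image in the registry, not just the Dockerfile text.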

Systematizing the threats to each element in the containerization system, we get this somewhat simplified scheme:

  • Images: use of untrusted images; software vulnerabilities; configuration errors; malware; secrets stored in plaintext.
  • Image registry: unsecured connection; outdated images with vulnerabilities; insufficient authentication and authorization.
  • Orchestrator: unrestricted administrative access; unauthorized access; lack of isolation and inspection of inter-container traffic; no separation of containers with different levels of data sensitivity across hosts; orchestrator configuration errors.
  • Containers: runtime environment vulnerabilities; unrestricted network access; insecure runtime configuration; application vulnerabilities; rogue containers in the runtime environment.
  • Host OS: shared OS kernel for all containers; OS component vulnerabilities; incorrect user permissions; file system accessible from containers.

Containers and protection using traditional security tools

Many defenses that have worked well for virtual machines cannot be applied to container security. It’s usually not possible to run an EDR agent inside a container, as in a virtual machine. Moreover, what happens in the container is not fully available for analysis by conventional security systems on the host system. Therefore, detecting, for example, vulnerable and malicious software inside the container is problematic, as is applying protection tools such as WAF in containerized applications. Traffic between containers is often carried over a virtual network at the orchestrator level and might not be accessible to network security tools.

Even on the host OS, an unadapted protection agent can lead to degradation of the performance or stability of deployed containerized applications. Cluster security must be provided at the host level in line with the particular orchestration environment and the nature of the container workloads.

There are also specific issues that must be addressed for container environments — like preventing untrusted containers from running, searching for secrets in containers, and restricting network traffic for each specific container based on its functions. All this is only available in specialized solutions such as Kaspersky Container Security.

What about protection with native tools?

All key containerization vendors appear to be working hard to improve the security of their products. Native Kubernetes tools, for example, can be used to configure resource quotas and logging policies, as well as implement RBAC (role-based access control) with the least-privilege principle. All the same, there are entire classes of information security tasks that cannot be solved with native tools — such as monitoring processes inside a running container, vulnerability analysis, checking compliance with information security policies and best practices, and much more.
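As an illustration of RBAC with least privilege, here’s the structure of a minimal Kubernetes Role granting read-only access to Pods in a single namespace, built as the Python dict that would serialize to the usual YAML manifest (the namespace and role names are hypothetical):

```python
import json

# A read-only Role scoped to one namespace: the least-privilege idea in RBAC.
pod_reader_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"namespace": "shop-frontend", "name": "pod-reader"},  # hypothetical
    "rules": [
        {
            "apiGroups": [""],                   # "" means the core API group
            "resources": ["pods"],
            "verbs": ["get", "watch", "list"],   # no create/update/delete
        }
    ],
}

print(json.dumps(pod_reader_role, indent=2))
```

A RoleBinding would then attach this Role to a specific service account; anything the Role doesn’t explicitly allow is denied.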

But above all, a mature and full-fledged container security system needs to ensure protection at the early stages of containerization: development, delivery, and storage. To achieve this, containerization security has to be built into the development process and integrated with developer tools.

How container protection becomes part of DevSecOps

The DevOps approach has evolved into DevSecOps due to the ever-increasing demands for application reliability and security. To make security an organic part of development, core information security requirements are automatically checked at all phases of application preparation and delivery wherever possible. Container environments facilitate this.

Planning phase: securing VCS and registry operations. Early in the development cycle, software developers select the components, including containerized ones, to be deployed in the application. The security system must check that registry images are up to date, and analyze configuration files (infrastructure as code — in particular, Dockerfiles) for errors and insecure settings. Base images used in development need to be scanned for vulnerabilities, malware, secrets, and the like. By doing so, developers significantly reduce the risk of supply-chain compromise.

Build and test phase: securing continuous integration operations. In this phase, it’s necessary to ensure that no secrets, vulnerable versions of libraries, or malware have gotten into the image, and that all information security aspects that can be analyzed comply with the requirements of regulators and the company itself. An application build cannot be completed successfully if any policy is violated. This is done by integrating the container security system with a CI/CD platform, be it Jenkins, GitLab, or CircleCI. Along with static and dynamic application security testing (AppSec), this measure is what distinguishes DevSecOps from other development approaches.
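The “fail the build on policy violations” logic can be sketched as a tiny gate function that a CI job would call after the image scan. The severity threshold and the shape of the scan report are assumptions for illustration:

```python
# Hypothetical scan report: finding -> severity, as a scanner might emit it.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def gate(findings: dict[str, str], max_allowed: str = "medium") -> bool:
    """Return True (build may proceed) only if no finding exceeds the threshold."""
    limit = SEVERITY_ORDER.index(max_allowed)
    return all(SEVERITY_ORDER.index(sev) <= limit for sev in findings.values())

report = {
    "CVE-2021-44228 (Log4Shell)": "critical",
    "hardcoded API key": "high",
    "outdated base image": "medium",
}
print(gate(report))                             # fails the build
print(gate({"outdated base image": "medium"}))  # passes
```

In a real pipeline, a non-zero exit code from this gate step is what blocks the deploy; documented, time-limited exceptions can be modeled as a per-finding allowlist.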

Delivery and deployment phase: security at the Continuous Delivery level. Images made operational need to be scanned for both integrity and full compliance with adopted policies. If the situation warrants an exception (for example, a vulnerability is published but not yet patched), it must always be documented and time-limited.

Operation phase: protecting the orchestrator and running containers, plus controlling container startup and operation. This phase minimizes the risks associated with vulnerabilities in the runtime environment or its misconfiguration. More importantly, only here is it possible to detect various anomalies in application operation, such as excessive computational load or unexpected communications with other containers and the network as a whole. This step also monitors the secure configuration of the orchestrator itself, as well as access to it. For container security, native operation with the Kubernetes or OpenShift orchestrator is critical here. At the same time, the host OS itself must not be left unprotected.

To operate at these stages, the container security system itself must be multi-component. The illustration shows the core elements of Kaspersky Container Security and their relationship with the containerization platform and the CI/CD platform.

What protection measures to take for each container environment component?

Let’s look at a more detailed list of the protection measures that must be applied to each component of the containerization system for its security to be called comprehensive.

  • Images: vulnerability assessment; scanning for image configuration errors; scanning for malware; search for secrets; risk assessment and identification of potentially dangerous images.
  • Image registry: registry integration and image scanning; a “closed list” allowing only approved and up-to-date images; search for incorrect configurations and access settings.
  • Orchestrator: detection of configuration errors and recommended fixes; visualization of resources in the cluster; detection and scanning of images in the cluster (search for unaccounted containers).
  • Containers: startup and operation control of trusted containers only; container integrity monitoring; startup control of applications and services inside containers; container traffic monitoring; minimization of container privileges; grouping of containers on hosts by risk/importance level.
  • Host OS: detection of configuration errors and recommended fixes; security risk mitigation through container startup control; an adapted OS version to minimize the attack surface.

The central element of the security system is the in-depth scanning of images. The security system needs to integrate with key registries (such as Docker Hub, GitLab Registry, or JFrog Artifactory), both public and corporate, and regularly scan used images in accordance with company policies. Each scan on the list is important in itself, but the risk profile and specifics of applications vary from company to company, so it may be possible, for example, to allow the use of images with low-criticality vulnerabilities. Also, depending on the security policies in place, CIS Kubernetes recommendations or various vulnerability databases, for instance, may be key.

Container images that fail scanning are either simply flagged for administrators, or blocked in later development and deployment phases.

The second, equally important and specific, group of protection tools operates at the container deployment and startup stage. First of all, containers that do not comply with policies and are not included in the trusted lists are prevented from running.

Runtime environment protection is incomplete without inspecting the orchestrator itself. This helps identify configuration errors, non-compliance with security policies, and unauthorized attempts to modify the configuration. Once the containers are running, monitoring orchestrator activity makes it possible to detect and halt suspicious activity both within and between clusters.

Some tasks from the matrix cannot be delegated to a security solution of any kind at all. These include the initial choice of a secure and minimalist OS build specially adapted for running container workloads, plus the crucial task of container grouping. For proper layered protection and convenient management, running containers need to be grouped on hosts so that information with certain security requirements is processed separately from information with lower security requirements. The implementation here depends on the orchestrator in use, but in any case, it’s primarily an exercise in risk assessment and threat modeling.

Generally, there are numerous container protection tasks, and trying to solve each of them in isolation, with one’s own tools or a manual configuration, would cause costs to soar. Hence, medium and large container environments require holistic security solutions that are deeply integrated with the containerization platform, the CI/CD pipeline, and the information security tools used in the company.

The job of information security experts is simplified by: integration with SIEM and channels for notifying about issues detected; regular automatic scanning of all images against an updated vulnerability database (such as NVD); functionality for temporary acceptance of information security risks; and detailed logging of administrative events in the containerization environment-protection system.

How Kaspersky Container Security implements protection

Our comprehensive solution protects container infrastructure by design: its components secure the entire lifecycle of containerized applications — from development to day-to-day operation. The dedicated scanner works with container images and provides static protection; the KCS agent running as a separate container under orchestrator control protects hosts in the runtime environment and the orchestration environment as a whole. Concurrently, the central component of Kaspersky Container Security integrates these parts and provides a management interface.

The high-performance platform offers robust protection for K8s clusters with hundreds of nodes.

The first version of Kaspersky Container Security, which implements core protection for container environments, is already available. And we are committed to developing the product and extending its functionality going forward.

#Full #list #containerization #defenses

Device drivers are irreplaceable programs written specifically for a particular operating system and a particular device (printer, external drive, mouse, etc.). They allow the OS and running applications to use the device by “translating” commands into its language. Some are written by Microsoft itself; others by third parties. And when we write that Microsoft is “getting to grips” with drivers, we mean that it’s tending to minimize the latter — those written by third parties.

What’s wrong with third-party drivers

Although drivers are indispensable, there are common problems with using them in practice.

  • Compatibility. If the driver installed is incompatible, the device won’t work correctly. And it’s not always possible to keep track of device/driver compatibility using automatic tools.
  • Stability. Since drivers work with devices directly, they have high privileges and often run in kernel mode. Many protection and isolation measures that apply to conventional applications are impracticable with drivers. And that means they’re capable of disrupting the entire system. Poorly written drivers are a common cause of freezes, the Blue Screen of Death, and other problems.
  • Security. Their high privileges make drivers of interest to attackers. If they find a poorly written, vulnerable driver, they can embed functions in it to perform various actions that are usually off-limits to malware, such as disabling your computer’s security or hiding malicious files from detection. Popular among hackers is the Bring Your Own Vulnerable Driver (BYOVD) technique, in which malware gets installed in the system along with a driver containing exploitable security holes. Drivers used in this way range from video card to gaming anti-cheat drivers.
  • Rare updates. All the above issues are compounded by the fact that device manufacturers release driver updates in their own time. Some do so once a month, some once a year, some never.

This complicates life for OS developers, tech support, and users themselves. The only ones who benefit are cybercriminals. To bypass security tools, they could look for vulnerabilities in the operating system itself, but this is quite tricky, and such vulnerabilities, once discovered, get quickly patched. But a vulnerable driver is often never patched, allowing it to run unnoticed — and be exploited — for a long time.

How Microsoft and standardization can solve the driver problem

Put simply, Microsoft wants there to be fewer drivers, and for only the most trusted of coders to be writing them.

Installing Windows used to be a lengthy procedure: after the operating system itself, you had to install three, five… even 10 drivers for your monitor, sound card, printer, scanner, and mouse. Two trends have consigned that to history.

First, Microsoft ships a whole host of drivers with Windows, and many popular devices start working right out of the box. This reduces the chances of downloading corrupted, outdated, or incompatible drivers. However, most drivers are still written by third-party vendors.

Second, the standardization of devices and interfaces has led to entire classes of devices (such as USB drives or mice) communicating with the computer over a common protocol, so that a single driver works with hundreds of devices from different manufacturers.

Microsoft recently announced its next step: phasing out third-party printer drivers. Going forward, Windows support for any new printer will be provided through Microsoft’s own IPP Class Driver, and vendor customizations and additions will be delivered through Print Support Apps published in the Windows Store. Starting in 2025, new printer drivers will no longer be publishable via Windows Update, and from 2027 this will extend to older drivers as well. True, nothing will stop vendors from publishing drivers the old-fashioned way, on their own websites, and these drivers will continue to function. However, this will become a niche solution, since users are accustomed to convenience.

How to avoid driver threats and problems

  • Try to use standard drivers supplied with Windows. Unless absolutely necessary, do not install proprietary utilities and add-ons from the device manufacturer. Practice shows that an 80 MB mouse driver and a 300 MB printer driver are superfluous to requirements, and the equipment works just fine without them.
  • If you manually install a driver for a device, check for updates regularly. If a driver has been updated, install the latest version right away. Out-of-date drivers create security risks.
  • Before buying a new device, check whether it works with standard drivers. You can do this by reading user reviews or contacting the manufacturer’s technical support. All else being roughly equal, it’s better to choose a device that uses standard drivers.
  • The situation is more complicated if you own outdated equipment in need of exotic drivers that likely haven’t been updated for years. If you can, replace such devices with newer ones equipped with automatically updated standard drivers. If that’s not possible, compensate for this security gap with more stringent security settings: don’t use administrator accounts for regular work; uninstall unused applications.
  • Protect your computer with a full-fledged security solution that prevents the exploitation of vulnerabilities in drivers and other software. Kaspersky products have dedicated components for this: System Watcher and Intrusion Prevention. System monitoring for suspicious activities is activated by default, but you can fine-tune it in the settings.
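On Windows, one quick way to see which installed drivers came from third parties is to inspect the output of the built-in pnputil tool. The sketch below parses a captured sample of pnputil /enum-drivers output; the field names follow pnputil’s typical format, but verify them against your own system before relying on the script:

```python
# Sketch: flag third-party (non-Microsoft) drivers in "pnputil /enum-drivers"
# output. SAMPLE_OUTPUT is an illustrative capture, not real system data.

SAMPLE_OUTPUT = """\
Published Name:     oem4.inf
Original Name:      prnms009.inf
Provider Name:      Microsoft
Class Name:         Printers

Published Name:     oem17.inf
Original Name:      rtsuvc.inf
Provider Name:      Realtek
Class Name:         Cameras
"""

def third_party_drivers(text):
    """Return (published name, provider) pairs whose provider isn't Microsoft."""
    drivers, current = [], {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            current[key.strip()] = value.strip()
        elif not line.strip() and current:   # a blank line ends one record
            drivers.append(current)
            current = {}
    if current:
        drivers.append(current)
    return [(d["Published Name"], d["Provider Name"])
            for d in drivers if d.get("Provider Name") != "Microsoft"]

print(third_party_drivers(SAMPLE_OUTPUT))   # [('oem17.inf', 'Realtek')]
```

On a live machine you would feed the function the actual command output (for example, captured via subprocess) rather than a hard-coded sample.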

#Windows #driver #compatibility #security #issues #stay #safe

Over the first 23 years of this century, the Linux operating system has become as ubiquitous as Windows. Although only 3% of people use it on their laptops and PCs, Linux dominates the Internet of Things, and is also the most popular server OS. You almost certainly have at least one Linux device at home — your Wi-Fi router. But it’s highly likely there are actually many more: Linux is often used in smart doorbells, security cameras, baby monitors, network-attached storage (NAS), TVs, and so on.

At the same time, Linux has always had a reputation for being a “trouble-free” OS that requires no special maintenance and is of no interest to hackers. Unfortunately, neither of these things is true of Linux anymore. So what are the threats faced by home Linux devices? Let’s consider three practical examples.

Router botnet

By running malware on a router, security camera, or some other device that’s always on and connected to the internet, attackers can exploit it for various cyberattacks. The use of such bots is very popular in DDoS attacks. A textbook case was the Mirai botnet, used to launch the largest DDoS attacks of the past decade.

Another popular use of infected routers is running a proxy server on them. Through such a proxy, criminals can access the internet using the victim’s IP address and cover their tracks.

Both of these services are constantly in demand in the cybercrime world, so botnet operators resell them to other cybercriminals.

NAS ransomware

Major cyberattacks on large companies with subsequent ransom demands — that is, ransomware attacks — have made us almost forget that this underground industry started with very small threats to individual users. Encrypting your computer and demanding a hundred dollars for decryption — remember that? In a slightly modified form, this threat re-emerged in 2021 and evolved in 2022 — but now hackers are targeting not laptops and desktops, but home file servers and NAS. At least twice, malware has attacked owners of QNAP NAS devices (Qlocker, Deadbolt). Devices from Synology, LG, and ZyXEL faced attacks as well. The scenario is the same in all cases: attackers hack publicly accessible network storage via the internet by brute-forcing passwords or exploiting vulnerabilities in its software. Then they run Linux malware that encrypts all the data and presents a ransom demand.

Spying on desktops

Owners of desktop or laptop computers running Ubuntu, Mint, or other Linux distributions should also be wary. “Desktop” malware for Linux has been around for a long time, and now you can even encounter it on official websites. Just recently, we discovered an attack in which some users of the Linux version of Free Download Manager (FDM) were being redirected to a malicious repository, where they downloaded a trojanized version of FDM onto their computers.

To pull off this trick, the attackers hacked into the FDM website and injected a script that randomly redirected some visitors to the official, “clean” version of FDM, and others to the infected one. The trojanized version deployed malware on the computer, stealing passwords and other sensitive information. There have been similar incidents in the past, for example, with Linux Mint images.

It’s important to note that vulnerabilities in Linux and popular Linux applications are regularly discovered (here’s a list just for the Linux kernel). Therefore, even correctly configured OS tools and access roles don’t provide complete protection against such attacks.

Basically, it’s no longer advisable to rely on widespread beliefs such as “Linux is less popular and not targeted”, “I don’t visit suspicious websites”, or “just don’t work as a root user”. Protection for Linux-based workstations must be as thorough as for Windows and macOS ones.

How to protect Linux systems at home

Set a strong administrator password for your router, NAS, baby monitor, and home computers. The passwords for these devices must be unique. Brute-forcing passwords and trying default factory passwords remain popular methods of attacking home Linux devices. It’s a good idea to store strong (long and complex) passwords in a password manager so you don’t have to type them in manually each time.
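If you’d rather not invent such passwords by hand, a couple of lines of Python can generate one per device for pasting straight into a password manager. This is a minimal sketch using the standard secrets module; the length and character set are arbitrary choices, not requirements:

```python
# Sketch: generate a strong, unique random password per device using the
# cryptographically secure "secrets" module from the standard library.
import secrets
import string

def make_password(length=20):
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

for device in ("router", "NAS", "baby monitor"):
    print(device, make_password())
```

Each run produces different passwords, which is the point: no two devices should ever share one.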

Update the firmware of your router, NAS, and other devices regularly. Look for an automatic update feature in the settings — that’s very handy here. These updates will protect against common attacks that exploit vulnerabilities in Linux devices.

Disable Web access to the control panel. Most routers and NAS devices allow you to restrict access to their control panel. Ensure your devices cannot be accessed from the internet and are only available from the home network.

Minimize unnecessary services. NAS devices, routers, and even smart doorbells function as miniature servers. They often include additional features like media hosting, FTP file access, printer connections for any home computer, and command-line control over SSH. Keep only the functions you actually use enabled.

Consider limiting cloud functionality. If you don’t use the cloud functions of your NAS (such as WD My Cloud) or can do without them, it’s best to disable them entirely and access your NAS only over your local home network. Not only will this prevent many cyberattacks, but it will also safeguard you against incidents on the manufacturer’s side.

Use specialized security tools. Depending on the device, the names and functions of available tools may vary. For Linux PCs and laptops, as well as some NAS devices, antivirus solutions are available, including regularly updated open-source options like ClamAV. There are also tools for more specific tasks, such as rootkit detection.

For desktop computers, consider switching to the Qubes operating system. It’s built entirely on the principles of containerization, allowing you to completely isolate applications from each other. Qubes containers are based on Fedora and Debian.

#Linux #home #protect #Linux #devices #hacking

“Security” and “overtime” go hand in hand. According to a recent survey, one in five CISOs works 65 hours a week, not the 38 or 40 written in their contract. Average overtime clocks in at 16 hours a week. The same is true for the rank-and-file infosec employees — roughly half complain of burnout due to constant stress and overwork. At the same time, staff shortages and budget constraints make it very hard to do the obvious thing: hire more people. But there are other options! We investigated the most time-consuming tasks faced by security teams, and how to speed them up.

Security alerts

The sure winner in the “timewaster” category is alerts generated by corporate IT and infosec systems. Since these systems often number in the dozens, they produce thousands of events that need to be handled. On average, a security expert has to review 23 alerts an hour — even off the clock. 38% of respondents admitted to having to respond to alerts at night.

What to do

  1. Use more solutions from the same vendor. A centralized management console with an integrated alert system reduces the number of alarms and speeds up their processing.
  2. Implement automation. For example, an XDR solution can automate typical analysis/response scenarios and reduce the number of alerts by combining disparate events into a single incident.
  3. Leverage an MSSP, an MDR service, or a commercial SOC. This is the most efficient way to flexibly scale alert handling. Full-time team members will be able to focus on building overall security and investigating complex incidents.

Emails with warnings

Notices from vendors and regulators and alerts from security systems get sent to the infosec team by email — often to a shared inbox. As a result, the same messages get read by several employees, including the CISO, and the time spent can add up to 5–10 hours a week.

What to do

  1. Offload as many alerts as possible to specialized systems. If security products can send alerts to a SIEM or a dashboard, that’s better than email.
  2. Use automation. Some typical emails can be analyzed using simple scripts and transformed into alerts in the dashboard. Emails that are unsuited to this method should be analyzed, scored for urgency and subject matter, and then moved to a specific folder or assigned to a designated employee. You don’t need an AI bot to complete this task; email-processing rules or simple scripts will do the job.

These approaches dramatically reduce the number of emails that require reading and fully manual processing by multiple experts.
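As a rough illustration of the rule-based approach from item 2, a few lines of Python can score warning emails by subject keywords before anyone reads them. The keywords and severity labels below are invented examples, not a recommended taxonomy:

```python
# Sketch: keyword-based triage of warning emails. Rules are checked in
# order of severity; anything unmatched is left for a human to review.

RULES = [
    ("critical", ("0-day", "actively exploited", "ransomware")),
    ("high",     ("vulnerability", "patch", "advisory")),
    ("info",     ("newsletter", "digest", "webinar")),
]

def triage(subject):
    s = subject.lower()
    for severity, keywords in RULES:
        if any(k in s for k in keywords):
            return severity
    return "review"          # no rule matched: leave for a human

print(triage("Vendor advisory: patch available"))   # high
print(triage("Monthly security digest"))            # info
```

In practice a script like this would run as a mail filter, moving each message to a severity folder or posting it to the team’s dashboard.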

Emails flagged by employees

Let’s end the email topic with a look at one last category of attention-seeking messages. If your company has carried out infosec training or is experiencing a major attack, many employees will consider it their duty to forward any suspicious-looking emails to the infosec team. If you have lots of eagle-eyed colleagues on your staff, your inbox will be overflowing.

What to do

  1. Deploy reliable protection at the mail gateway level — this will significantly reduce the number of genuine phishing emails. With specialized defense mechanisms in place, you’ll defeat sophisticated targeted attacks as well. Of course, this will have no impact on the number of vigilant employees.
  2. If your email security solution allows users to “report a suspicious email”, instruct your colleagues to use it so that the infosec team doesn’t have to process such reports manually.
  3. Set up a separate email address for messages with employees’ suspicions so as to avoid mixing this category of emails with other security alerts.

  4. If item 2 is not feasible, focus your efforts on automatically searching for known safe emails among those sent to the address for suspicious messages. These make up a large percentage, so the infosec team will only have to check the truly dangerous ones.
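The automatic search for known safe emails mentioned in the last item can start as simply as an allowlist check on the sender’s domain. The domains and addresses below are placeholders:

```python
# Sketch: auto-clear employee reports about senders that are known to be
# safe, so analysts only review the rest. The allowlist is illustrative.

KNOWN_SAFE_DOMAINS = {"hr.example.com", "payroll.example.com"}

def needs_review(sender):
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain not in KNOWN_SAFE_DOMAINS

reports = ["news@hr.example.com", "it-helpdesk@evil.example.net"]
print([r for r in reports if needs_review(r)])   # ['it-helpdesk@evil.example.net']
```

A sender domain alone is spoofable, of course, so a real filter would also check authentication results (SPF/DKIM) before clearing a report.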

Prohibitions, risk assessments, and risk negotiations

As part of the job, the CISO must strike a delicate balance between information security, operational efficiency, regulatory compliance, and resource limitations. To improve security, infosec teams very often ban certain technologies, online services, data storage methods, etc., in the company. While such bans are inevitable and necessary, it’s important to regularly review how they impact the business and how the business adapts to them. You may find, for example, that an overly strict policy on personal data processing has resulted in that process being outsourced, or that a secure file-sharing service was replaced by something more convenient. As a result, infosec wastes precious time and energy clambering over obstacles: first negotiating the “must-nots” with the business, then discovering workarounds, and then fixing inevitable incidents and problems.

Even if such incidents do not occur, the processes for assessing risks and infosec requirements when launching new initiatives are multi-layered, involve too many people, and consume too much time for both the CISO and their team.

What to do

  1. Avoid overly strict prohibitions. The more bans, the more time spent on policing them.
  2. Maintain an open dialogue with key internal customers about how infosec controls impact their processes and performance. Compromise on technologies and procedures to avoid the issues described above.
  3. Draw up standard documents and scenarios for recurring business requests (“build a website”, “collect a new type of information from customers”, etc.), giving key departments a simple and predictable way to solve their business problems with full infosec compliance.
  4. Handle the remaining business requests on a case-by-case basis: teams that show a strong infosec culture can undergo security audits less frequently — only at the most critical phases of a project. This reduces the time outlays for both the business and the infosec team.

Checklists, reports, and guidance documents

Considerable time is spent on “paper security” — from filling out forms for the audit and compliance departments to reviewing regulatory documents and assessing their applicability in practice. The infosec team may also be asked to provide information to business partners, who are increasingly focused on supply chain risks and demanding robust information security from their counterparties.

What to do

  1. Invest time and effort in creating “reusable” documents, such as a comprehensive security whitepaper, a PCI Report on Compliance, or a SOC2 audit. Having such a document helps not only with regulatory compliance, but also with responding quickly to typical requests from counterparties.
  2. Hire a subspecialist (or train someone from your team). Many infosec practitioners spend a disproportionate amount of time formulating ideas for whitepapers. Better to have them focus on practical tasks and have specially trained people handle the paperwork, checklists, and presentations.
  3. Automate processes — this helps not only to shift routine control operations to machines but to correctly document them. For example, if the regulator requires periodic vulnerability scan reports, a one-off resource investment in an automatic procedure for generating compliant reports would make sense.
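As a sketch of the report-generation idea from item 3, the snippet below turns raw scan findings into a plain-text summary. The input format is invented for the example; adapt it to whatever your scanner actually exports:

```python
# Sketch: summarize vulnerability-scan findings into a report skeleton.
# The "scan" structure is a made-up example of a scanner export.
from collections import Counter
from datetime import date

scan = [
    {"host": "web01", "cve": "CVE-2023-1234", "severity": "high"},
    {"host": "web01", "cve": "CVE-2023-9999", "severity": "low"},
    {"host": "db01",  "cve": "CVE-2023-1234", "severity": "high"},
]

def summarize(findings):
    by_severity = Counter(f["severity"] for f in findings)
    hosts = sorted({f["host"] for f in findings})
    return (f"Vulnerability scan report, {date.today():%Y-%m-%d}\n"
            f"Hosts scanned: {', '.join(hosts)}\n"
            + "\n".join(f"  {sev}: {n}" for sev, n in by_severity.most_common()))

print(summarize(scan))
```

Scheduled to run after each scan, a script like this produces a consistent document every period with no analyst time spent on formatting.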

Selecting security technologies

New infosec tools appear monthly. Buying as many solutions as possible won’t only balloon the budget and the number of alerts, but also create a need for a separate, labor-intensive process for evaluating and procuring new solutions. Even leaving tenders and paperwork aside, the team will need to conduct market research, evaluate the contenders in depth, and then carry out pilot implementation.

What to do

  1. Try to minimize the number of infosec vendors you use. A single-vendor approach tends to improve performance in the long run.
  2. Include system integrators, VARs, or other partners in the evaluation and testing process when purchasing solutions. An experienced partner will help weed out unsuitable solutions at once, reducing the burden on in-house infosec during the pilot implementation.

Security training

Although various types of infosec training are mandatory for all employees, their ineffective implementation can overwhelm the infosec team. Typical problems: the entire training is designed and delivered in-house; a simulated phishing attack provokes a wave of panic and calls to infosec; the training isn’t tailored to the employees’ level, potentially leading to an absurd situation where infosec itself undergoes basic training because it’s mandatory for all.

What to do

Use an automated platform for employee training. This will make it easy to customize the content to the industry and the specifics of the department being trained. In terms of complexity, both the training materials and the tests adapt automatically to the employee’s level; and gamification increases the enjoyment factor, raising the successful completion rate.

#boost #performance #infosec #team

Videocalls became much more widespread after the COVID-19 pandemic began, and they continue to be a popular alternative to face-to-face meetings. Both platforms and users soon got over the teething problems, and learned to take basic security measures when hosting videoconferences. That said, many online participants still feel uncomfortable knowing that they might be recorded and eavesdropped on all the time. Zoom Video Communications, Inc. recently had to offer explanations regarding its new privacy policy, which states that all Zoom videoconferencing users give the company the right to use any of their conference data (voice recordings, video, transcriptions) for AI training. Microsoft Teams users in many organizations are well aware that turning on recording means activating transcription as well, and that AI will even send premium subscribers a recap. For those out there who discuss secrets on videocalls (for instance in the telemedicine industry), or simply have little love for Big Tech Brother, there are lesser-known but far more private conferencing tools available.

What can we protect ourselves against?

Let’s make one thing clear: following the tips below isn’t going to protect you from targeted espionage, a participant secretly recording a call, pranks, or uninvited guests joining by using leaked links. We already provided some videoconferencing security tips that can help mitigate those risks. Protecting every participant’s computer and smartphone with comprehensive cybersecurity — such as Kaspersky Premium — is equally important.

Here, we focus on other kinds of threats such as data leaks from the videoconferencing platform, misuse of call data by the platform, and the harvesting of biometric information or conference content. There are two possible engineering solutions to these: (i) hosting the conference entirely on participant computers and servers, or (ii) encrypting it, so that even the host servers have no access to the meeting content. The latter option is known as end-to-end encryption, or E2EE.
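To make the E2EE idea concrete, here is a toy Diffie-Hellman exchange in Python: both parties end up with the same key, while a server relaying their messages sees only the public values. This is a classroom illustration only; real messengers use vetted protocols such as X25519 and the Signal protocol, not a small prime like this one:

```python
# Toy illustration of the core E2EE idea: two parties derive an identical
# secret key without the relaying server ever learning it. NOT real crypto:
# the prime is deliberately tiny and there is no authentication.
import secrets

P = 2**61 - 1    # small Mersenne prime -- fine for a demo, useless for security
G = 2

alice_private = secrets.randbelow(P - 2) + 2
bob_private   = secrets.randbelow(P - 2) + 2

# Only these public values ever cross the server.
alice_public = pow(G, alice_private, P)
bob_public   = pow(G, bob_private, P)

# Each side combines its own private key with the other's public key.
alice_key = pow(bob_public, alice_private, P)
bob_key   = pow(alice_public, bob_private, P)

assert alice_key == bob_key   # same key on both ends; server saw neither
```

The server can forward alice_public and bob_public all day long without being able to compute the shared key, which is exactly the property that keeps E2EE meeting content out of the platform’s reach.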

Signal: a basic tool for smaller group calls

We have repeatedly described Signal as one of the most secure private instant messaging apps around, but Signal calls are protected with E2EE as well. To host a call, you have to set up a chat group, add everyone you want to call, and tap the videocall button. Group videocalls are limited to 40 participants. Admittedly, you’re not getting any business conveniences such as call recording, screen sharing, or corporate contact-list invitations. Besides, you’ll need to set up a separate group for each meeting, which works well for regular calls with the same people, but not so much if the participants change every time.

Signal lets you set up videoconferences for up to 40 participants in a familiar interface

WhatsApp and Facetime: just as easy — but not without their issues

Both these apps are user-friendly and popular, and both support E2EE for videocalls. They share all the shortcomings of Signal, adding a couple of their own: WhatsApp is owned by Meta, which is a privacy red flag for many, while Facetime calls are only available to Apple users.

Jitsi Meet: self-hosted private videoconferencing

The Jitsi platform is a good choice for large-scale, fully featured, but still private meetings. It can host meetings with dozens to hundreds of participants, complete with screen sharing, chat and polls, co-editing of notes, and more. Jitsi Meet supports E2EE, and the conference itself is created at the moment the first participant joins and self-destructs when the last one disconnects. No chats, polls, or any other conference content is logged. Finally, Jitsi Meet is an open-source app.

Jitsi Meet is a user-friendly, cross-platform videoconferencing tool with collaboration options. It can be self-hosted or used for free on the developer’s website

Though the public version can be used for free on the Jitsi Meet website, the developers strongly recommend that organizations deploy a Jitsi server of their own. Paid hosting by Jitsi and major hosting providers is available for those who’d rather avoid spinning up a server.

Matrix and Element: every type of communication — fully encrypted

The Matrix open protocol for encrypted real-time communication and the applications it powers — such as Element — are a fairly powerful system that supports one-on-one chats, private groups and large public discussion channels. The Matrix look-and-feel resembles Discord, Slack and their forerunner, IRC, more than anything else.

Connecting to a Matrix public server is a lot like getting a new email address: you select a user name, register it with one of the available servers, and receive a Matrix address in the form @username:servername. That allows you to talk freely to other users, including those registered with different servers.

Even a public server makes it easy to set up an invitation-only private space with topic-based chats and videocalls.

The settings in Element are slightly more complex, but you get more personalization options: chat visibility, permission levels, and so on. Matrix/Element makes sense if you’re after team communications in various formats, such as chats or calls, and on various topics rather than just a couple of odd calls. If you’re simply looking to host a call from time to time, Jitsi works better — the call feature in Element even uses Jitsi code.

Element is a fully featured environment for private conversations, with video chats just one of the available options

Corporations are advised to use the Element enterprise edition, which offers advanced management tools and full support.

Zoom: encryption for the rich

Few know that Zoom, the dominant videoconferencing service, has an E2EE option too. But to enable this feature, you need to additionally purchase the Large Meetings License, which lets you host 500 or 1000 participants for $600–$1080 a year. That makes the price of E2EE at least $50 per month higher than the regular subscription fee.

Zoom supports videoconferencing with E2EE too, but you need an extended license to be able to use it

You can enable encryption for smaller meetings as well, but still only if you have a Large Meetings License. According to the Zoom website, activating E2EE for a meeting disables most familiar features, such as cloud recording, dial-in, polling and others.

#Top #apps #encrypted #private #videocalls

The creators of any website bear the moral and legal responsibility for it during its entire existence. Moreover, few people know that if a corporate web server gets hacked, it’s not only the company and its customers that may suffer; often, a hacked site becomes a platform for launching new cyberattacks, with its owners not even being aware of it.

Why websites get hacked

A website hack can be part of a larger cyberattack, or a standalone operation. By “hack”, we mean making changes to the target site — not to be confused with a DDoS attack. If your company finds itself in the crosshairs of hackers, their goals are usually to:

  • Exert pressure on the victim organization as part of a ransomware attack, including by making the hack known to customers and partners;
  • Download valuable information from the site, for example, customer contact details stored in a database;
  • Distract IT and InfoSec teams from a more serious data theft or sabotage attack occurring at the same time;
  • Cause reputational damage.

That said, very often hackers don’t need your site in particular. They’ll happily make do with any reputable site they can sneak malicious content onto. Once that’s achieved, they can populate the site with phishing pages, links to spam resources, and pop-up ads. Basically, it turns into a cybercriminal tool. At the same time, the main sections of the site may be unaffected. Customers and employees visiting the home page won’t notice anything different. The malicious content is tucked away in new subfolders to which victims get lured through direct links.

How websites get hacked

Website hacks are normally carried out through vulnerabilities in server applications: web servers, databases, or content management systems and their add-ons. Around 43% of all websites on the internet run on WordPress, so it’s no surprise that hackers pay special attention to this content management system. Vulnerabilities are discovered in WordPress and thousands of add-ons for it regularly, and not all authors get around to fixing their plug-ins. And besides, not all users promptly install updates for their sites.

Attackers can exploit a vulnerability to upload to the web server a so-called web shell; that is, additional files and scripts allowing them to manage site content while bypassing standard administration tools. Next, they place malicious content on the site in subfolders, taking pains not to affect the main pages of the legitimate site.

Another common hacking scenario is to guess the administrator password. This is possible if the administrator uses weak passwords, or the same password on different web resources. In this way, cybercriminals can place malicious content by means of standard administration tools, creating new users on the site, as well as additional subsections or pages. However, this increases the likelihood of detection, so even in this case, attackers prefer to install their own backdoor in the shape of a web shell.
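A defender can hunt for simple web shells by scanning files for telltale constructs. The signature list below is a tiny illustrative sample; real scanners use much larger rule sets, and attackers obfuscate their code, so treat matches as leads rather than proof:

```python
# Sketch: a naive scan for common web-shell fingerprints in PHP source.
# The signatures are a small illustrative sample, not a complete rule set.
import re

SIGNATURES = [
    r"eval\s*\(\s*base64_decode",
    r"system\s*\(\s*\$_(GET|POST|REQUEST)",
    r"assert\s*\(\s*\$_(GET|POST|REQUEST)",
]

def suspicious(source):
    """Return the signatures that match the given file contents."""
    return [sig for sig in SIGNATURES if re.search(sig, source)]

clean = "<?php echo 'hello'; ?>"
shell = "<?php eval(base64_decode($_POST['x'])); ?>"
print(suspicious(clean))   # []
print(len(suspicious(shell)))   # 1
```

Run over a site’s document root, a scan like this surfaces candidate files for manual review, which matters precisely because attackers tuck web shells into subfolders the owner never looks at.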

Damage from website hacking

In the case of a large-scale targeted attack, the company immediately suffers financial and reputational damage. As for opportunistic attacks, the harm is indirect. Website maintenance costs can increase due to spam content and the traffic it generates. At the same time, the site’s SEO reputation drops, so it gets fewer visitors from search engines. The site may even be flagged as malicious, in which case its traffic drops catastrophically. In practice, however, hackers often go for abandoned sites, whose owners are unlikely to notice traffic issues at all.

How websites get abandoned

The internet has long turned into a website graveyard. According to statistics, there are more than 1.1 billion websites in total, but 82% of them are not updated or maintained. In the case of corporate websites, a number of scenarios can be the cause:

  • A company ceases to operate, but its website is published on free hosting and keeps running;
  • The only employee with access to the site leaves the small business. Unless the owners take action, the site will remain frozen for months or even years;
  • A company rebrands or merges, but keeps the old website “temporarily” for customers. The revamped entity then gets a brand-new site, and the “temporary” old one is gradually forgotten;
  • A dedicated site is launched for a marketing campaign, product line, blog, or side project. When the project is over, the site is no longer updated, but it’s not shut down either.

Signs of website hacking

Since the main pages are often left untouched by hackers, it can be difficult to tell if your site has been compromised. But there are some pointers: the site is running slower than usual; traffic has sharply increased or decreased for no apparent reason; new links or banners have appeared out of nowhere; you have problems accessing the control panel; new folders, files, or users can be seen in the control panel. Still, the most obvious sign is if others start bombarding you with complaints about malicious content on your site. To properly diagnose the situation, you need to study the web server logs, but this task is better entrusted to experts. Like pest control, it takes experience to get rid of an infestation — which here means removing the web shell and other backdoors from the site.
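One of the checks an expert would run on those logs can be sketched in a few lines: flag POST requests to paths that aren’t part of the known site. The log lines follow the common combined log format; the addresses and paths are made up:

```python
# Sketch: flag POST requests to unknown paths in web-server access logs,
# a common sign of a web shell being operated. KNOWN_PATHS would be built
# from the site's real structure; these entries are examples.

KNOWN_PATHS = {"/", "/contact", "/blog", "/wp-login.php"}

LOG = [
    '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512',
    '198.51.100.9 - - [10/Oct/2023:13:57:01 +0000] "POST /uploads/x1/shell.php HTTP/1.1" 200 77',
]

def odd_posts(lines):
    hits = []
    for line in lines:
        try:
            request = line.split('"')[1]          # e.g. 'POST /path HTTP/1.1'
            method, path, _ = request.split(" ", 2)
        except (IndexError, ValueError):
            continue                              # skip malformed lines
        if method == "POST" and path not in KNOWN_PATHS:
            hits.append(path)
    return hits

print(odd_posts(LOG))   # ['/uploads/x1/shell.php']
```

A hit like the one above, a POST to a script buried in an uploads folder, is exactly the pattern a planted web shell produces.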

How to guard against website hacking

Even small companies without a large cybersecurity budget can implement simple measures that greatly reduce the chances of getting hacked:

  • Set long, strong passwords for the administration section of your site, and enable two-factor authentication. Each administrator must have their own password;
  • Never allow just one person to have access to the site (unless the company has just one employee, naturally). Remember to revoke access when employees leave;
  • Keep all software components of the site updated, including the operating system, web server, databases, content management system, and add-ons. Install updates as soon as they’re released. If your company lacks the time or expertise, it’s better to use professional website hosting where security is handled by a dedicated team. For example, for WordPress there are specialized secure hosting platforms, such as WP Engine;
  • Maintain a registry of all company websites. It should list every site created, even temporary ones set up, say, for a one-month ad campaign;
  • Each site in the registry should have its software components updated regularly, even if there’s no business need to update the content;
  • If the site is no longer needed, and the resources to maintain it are lacking, it’s better to close it down in a tidy manner. Save the data to an archive, then terminate your hosting account. If necessary, you can also cancel the domain delegation. Another way to shut down a subsite is to remove all its content, disable any software such as WordPress and its add-ons, and set up redirection to the company’s main site.

#Ways #protect #WordPress #sites #blogs #hacking

All large companies have formal processes for both onboarding and offboarding. These include granting access to corporate IT systems after hiring, and revoking said access during offboarding. In practice, the latter is far less effective — with departing employees often retaining access to work information. What are the risks involved, and how to avoid them?

How access gets forgotten

New employees are granted access to the systems they need for their jobs. Over time, these access rights accumulate, but they're not always issued centrally, and the process is by no means always standardized. Direct managers might grant access to systems without notifying the IT department, while chats in messenger apps or document-exchange systems get created ad hoc within a department. Poorly controlled access of this kind is almost certain not to be revoked when the employee is offboarded.

Here are some typical scenarios in which IT staff may overlook access revocation:

  • The company uses a SaaS system (Ariba, Concur, Salesforce, Slack… there are thousands of them) that's accessed with a username and password the employee sets at first login, and that isn't integrated with the corporate employee directory.
  • Employees share a common password for a particular system (perhaps to save money with a single subscription, or because the system lacks a full multi-user architecture). When one of them leaves, no one bothers to change the password.
  • A corporate system allows login using a mobile phone number and a code sent by text. Problems arise if an offboarded employee keeps the phone number they used for this purpose.
  • Access to some systems is bound to a personal account. For example, administrators of corporate pages on social media often get access by having the corresponding role assigned to their personal account, so this access needs to be revoked in the social network as well.
  • Last but not least is the problem of shadow IT. Any system that employees adopt and run on their own is bound to fall outside standard inventory, password control, and other procedures. Most often, offboarded employees retain the ability to perform collaborative editing in Google Docs, manage tasks in Trello or Basecamp, and share files via Dropbox and similar file-hosting services, as well as access to work and semi-work chats in messenger apps. That said, pretty much any system could end up on the list.

The danger of unrevoked access

Depending on the role of the employee and the circumstances of their departure, unrevoked access can create the following risks:

  • The offboarded employee's accounts can be used by a third party for cyberattacks on the company. A variety of scenarios are possible here, from business email compromise to unauthorized entry into corporate systems and data theft. Since the departed employee no longer uses these accounts, such activity is likely to go unnoticed for a long time. Forgotten accounts may also use weak passwords and lack two-factor authentication, which simplifies their takeover. No surprise, then, that forgotten accounts are becoming very popular targets for cybercriminals.
  • The offboarded employee might continue to use accounts for personal gain (accessing the customer base to get ahead in a new job; or using corporate subscriptions to third-party paid services).
  • There could be a leak of confidential information (for example, if business documents are synchronized with a folder on the offboarded employee’s personal computer). Whether the employee deliberately retained this access to steal documents or it was just plain forgetfulness makes little difference. Either way, such a leak creates long-term risks for the company.
  • If the departure was acrimonious, the offboarded employee may use their access to inflict damage.

Additional headaches: staff turnover, freelancing, subcontractors

Keeping track of SaaS systems and shadow IT is already a challenge, and the situation is made worse by the fact that not all company offboarding processes are properly formalized.

An additional risk factor is freelancers. If they were given some kind of access as part of a project, it’s extremely unlikely that IT will promptly revoke it — or even know about it — when the contract expires.

Contracting companies likewise pose a danger. If a contractor fires one employee and hires another, the old credentials are often simply handed to the new person rather than deleted and replaced. Your IT service has no way of knowing about the change in personnel.

In companies with seasonal employees or high turnover in certain positions, there's often no full-fledged, centralized on/offboarding procedure at all, simply to keep operations lean. So you can't assume there'll be an onboarding briefing or a comprehensive offboarding checklist. Employees in such jobs often share a single password for internal systems, which may even be written on a sticky note right next to the computer or terminal.

How to take control

The administrative aspect is key. Below are a few measures that significantly mitigate the risk:

  • Regular access audits. Carry out periodic audits to determine what employees have access to. The audit should identify accesses that are no longer current or were issued unintentionally or outside of standard procedures, and revoke them as necessary. For audits, a technical analysis of the infrastructure is not enough. In addition, surveys of employees and their managers should be carried out in one form or another. This will also help bring shadow IT out of the shadows and in line with company policies.
  • Close cooperation between HR and IT during offboarding. Departing employees should be given an exit interview. Besides questions important for HR (satisfaction with the job and the company; feedback about colleagues), this should cover IT matters: request a complete list of systems the employee used daily, and ensure that all work information has been handed over to colleagues and not left on personal devices. The offboarding process usually involves signing documents that hold the departing employee responsible for disclosure or misuse of such information. In addition to the employee, it's advisable to interview their colleagues and management so that IT and InfoSec are fully briefed on all their accounts and accesses.
  • Creation of standard roles in the company. This measure combines technical and organizational aspects. For each position and each type of work, you can draw up a template set of accesses to be issued during onboarding and revoked during offboarding. This lets you create a role-based access control (RBAC) system and greatly simplify the work of IT.
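Such role templates can be expressed directly in code. Below is a minimal sketch in Python; the role names and system names are made up for illustration, and a real deployment would pull them from an IAM system rather than a hard-coded dictionary:

```python
# Hypothetical role-based access templates: each role maps to the set
# of systems granted at onboarding and revoked at offboarding.
ROLE_TEMPLATES = {
    "sales_manager": {"crm", "email", "shared_drive"},
    "accountant":    {"erp", "email", "banking_portal"},
}

def accesses_to_grant(role):
    """Access rights to issue during onboarding for a given role."""
    return ROLE_TEMPLATES.get(role, set())

def accesses_to_revoke(role, currently_held):
    """At offboarding, revoke the role's template set plus anything
    extra the employee accumulated outside it. The extras are returned
    separately so they can be flagged for the next access audit."""
    template = accesses_to_grant(role)
    extras = currently_held - template  # access outside the standard role
    return template | extras, extras
```

The useful side effect is the `extras` set: anything an employee holds beyond their role's template is exactly the poorly controlled access the audits above are meant to surface.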

Technical measures to facilitate access control and increase the overall level of information security:

  • Implementing Identity and Access Management (IAM) and identity security systems. The keystone here is a single sign-on (SSO) solution based on a centralized employee directory.
  • Asset and inventory tracking to centrally keep tabs on corporate devices, work mobile phone numbers, issued licenses, and so on.
  • Monitoring of outdated accounts. Information security tools can be used to introduce monitoring rules to flag accounts in corporate systems if they have been inactive for a long time. Such accounts must be periodically checked and disabled manually.
  • Compensating controls for shared passwords that can't be eliminated: at a minimum, change them more often, particularly after someone with access leaves.
  • Time-limited access for freelancers, contractors, and seasonal employees. For them, it's always best to issue short-term access, and to extend or change it only when necessary.
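The inactive-account monitoring rule mentioned above can be sketched in a few lines. This is a simplified illustration, assuming last-login timestamps are already exported from your directory service or SIEM; the threshold and account names are made up:

```python
from datetime import datetime, timedelta

# Hypothetical policy: any account unused for 90+ days gets flagged
# for manual review and disabling.
INACTIVITY_THRESHOLD = timedelta(days=90)

def flag_stale_accounts(last_logins, now=None):
    """Given a mapping of account -> last-login datetime, return the
    accounts dormant longer than the threshold, longest-dormant first.
    These should be reviewed and disabled manually, not auto-deleted."""
    now = now or datetime.now()
    stale = {acct: now - seen for acct, seen in last_logins.items()
             if now - seen > INACTIVITY_THRESHOLD}
    return sorted(stale, key=stale.get, reverse=True)
```

Sorting the longest-dormant accounts first matters in practice: those are the most likely to belong to long-gone employees and the most attractive targets for takeover.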

#Measures #protect #data #employee #leaves
