
AI and data protection

AI and data protection in practice - between innovation and regulation

The rapid spread of artificial intelligence (AI) in organisations requires data protection to be considered early and as an integral part of every AI project. Although the AI Regulation (AI Act) creates a framework for high-risk AI, compliance with existing data protection laws such as the GDPR and the BDSG remains essential. The decisive factor is that AI is primarily a tool for achieving objectives (e.g. process optimisation); the legal basis under data protection law follows from the respective use case and purpose in accordance with the GDPR. This requires a holistic approach, from a clear definition of purpose and a data protection impact assessment (DPIA) to data minimisation and transparency. The AI Regulation itself also addresses aspects relevant to data protection, such as training data. This article sheds light on how AI and data protection can be combined: not only as a challenge, but as an opportunity to proactively shape solutions for a responsible future.

Key information on AI and data protection

  • Data protection must be considered on an application-specific basis. The GDPR applies regardless of the technology. When using AI, the specific purpose of the processing is decisive for the legal basis and compliance with data protection principles.
  • The AI Regulation supplements the GDPR, but does not replace it. The AI Regulation creates specific rules for AI systems, in particular high-risk AI, and specifies data protection obligations (e.g. transparency, data quality). However, the GDPR remains the foundation for the protection of personal data.
  • Transparency and information are particularly challenging and important when using AI. Due to the complexity of many AI systems, organisations must make special efforts to provide data subjects with clear information about data processing, automated decisions and their rights.
  • The data protection principles of the GDPR also apply to AI and require specific considerations. From lawfulness, purpose limitation and data minimisation to accountability, all principles must be carefully examined and implemented in the context of AI systems.
  • AI can also be used to support GDPR compliance. Automation of routine tasks (document analysis, data subject rights), monitoring compliance, analysing data flows and detecting data protection violations are possible fields of application. Data protection should therefore not be seen as an obstacle to innovation, but as an integral part of AI projects.

Between innovation and regulation

Artificial intelligence (AI) is currently caught between the pressure to innovate and regulatory responsibility like no other topic. According to a 2024 Bitkom guide, 22 % of employees use generative AI with their employer's knowledge, and the trend is rising. However, this development also increases the complexity of data protection requirements. The functioning of AI systems, whether in the form of machine learning (ML), the analysis of huge amounts of data (big data analytics) or the use of large language models (LLMs), raises important data protection issues. How can these technologies be reconciled with the principles of the GDPR?

AI basics for data protectionists:

  • Algorithms as a basis: Every AI is based on algorithms - detailed instructions that computers follow.
  • Learning from data: ML systems "learn" from data in order to recognise patterns and make predictions. The more data, the better the results often are.
  • Big data as fuel: Analysing large amounts of data is essential for many AI applications in order to find meaningful patterns.
  • Language processing of the future: LLMs enable machines to understand and generate human language, opening up new forms of interaction.

Legal framework for the processing of personal data using AI

The processing of personal data by artificial intelligence (AI) is subject to a complex interplay of different legal standards. The most important pillars of this framework are the General Data Protection Regulation (GDPR) and the AI Regulation.

The GDPR: The foundation of data protection

The General Data Protection Regulation (GDPR) forms the central foundation for the protection of personal data in the European Union. It regulates the processing of personal data by public and private bodies. The processing of personal data by AI systems must always be based on a lawful basis. Whether consent, contract or legitimate interest - the choice of legal basis requires careful consideration. In addition, the purpose of the data processing must be clearly defined and the amount of data must be limited to the necessary minimum.

See also the blog post "General Data Protection Regulation (EU GDPR)".

The AI Regulation: Specific rules for artificial intelligence

With the AI Regulation, the European Union has created the world's first comprehensive legal framework for artificial intelligence. The aim of the regulation is to establish standardised rules for the development, marketing and use of AI systems and, in particular, to protect the safety and fundamental rights of citizens.

With regard to personal data, the AI Regulation supplements and concretises the GDPR through a risk-based approach with strict requirements for high-risk AI in terms of transparency, accuracy and human oversight. Specific transparency obligations apply to AI systems that interact with people or create synthetic content, including disclosure of the use of personal data in training. Providers and deployers of AI systems must ensure safe and fundamental-rights-compliant use, including the protection of personal data. The AI Regulation builds on the GDPR and clarifies it for AI: the GDPR remains the basis for data protection, while the AI Regulation sets out additional requirements for AI systems. The legal framework for processing personal data using AI therefore arises from the interaction of the two laws, and companies must consider both carefully in order to fulfil the general data protection requirements as well as the AI-specific ones.

See also the blog post "AI regulation - current status".

Requirements of the Global Privacy Assembly

The Global Privacy Assembly (GPA) has called for measures for the responsible use of artificial intelligence (AI) that go beyond the requirements of the General Data Protection Regulation (GDPR). These include, in particular, the assessment and disclosure of potential impacts on human rights, including the protection of data and privacy, prior to the use of AI. Furthermore, keeping detailed records of the impact assessment, design, development, testing and use of AI systems is considered essential. Another important point is to ensure transparency and openness by disclosing the use of AI, the data used and the underlying logic. These additional accountability obligations are aimed at comprehensively addressing the potential risks of AI and strengthening trust in these technologies.

The "black box" of AI: a challenge for transparency and data subject rights

One of the biggest challenges in the area of conflict between AI and data protection is the lack of transparency of many AI systems, often referred to as "black boxes". With complex machine learning models in particular, it is often difficult to understand exactly how decisions or results are arrived at. This lack of traceability can make it considerably more difficult to comply with the transparency obligations of the GDPR (Art. 12-14) and hinder the effective exercise of data subjects' rights, such as the right of access (Art. 15 GDPR) or the right to an explanation of automated decisions (Art. 22 GDPR).

To counter this problem, companies are required to take proactive measures. This includes detailed documentation of design decisions and the data used. In addition, alternative, more transparent AI architectures and methods should be considered ("Explainable AI" or XAI), which enable better traceability of decision-making. Close cooperation between data protection experts and IT managers is essential in order to develop strategies that guarantee both the innovative power of AI and the rights and protection of data subjects. Only through such an interdisciplinary approach can the "black box" of AI be illuminated to a certain extent and data protection-compliant use ensured.


GDPR-compliant processing of personal data by AI

Many organisations are asking themselves the question: Is the use of AI compatible with the GDPR at all? The clear answer: Yes - under certain conditions. The use of AI technology does not automatically mean a breach of data protection. The GDPR is formulated in a technology-neutral way and ensures that new technologies can be integrated in a legally compliant manner, provided that certain principles are adhered to. It becomes particularly critical when AI systems process personal data, e.g. to make decisions about individuals. In these cases, stricter requirements such as Art. 22 GDPR (automated decisions in individual cases including profiling) apply.

AI and the data protection principles under Article 5 of the GDPR: A practical guide

Artificial intelligence (AI) offers enormous opportunities, but also presents organisations with complex data protection challenges. Compliance with the General Data Protection Regulation (GDPR) is essential. This article highlights the central data protection principles of Article 5 GDPR in the context of AI and provides practical advice for organisations.

1. Lawfulness: the basis of all AI processing

Every use of AI, from development to training to application, must be based on a valid legal basis (Art. 6 and 9 GDPR). As the GDPR does not mention any specific AI bases, the general legal bases apply. What is important is that data protection must be integrated into AI development from the outset (privacy by design, Art. 25 GDPR). The purpose limitation is crucial for training data. Can data that has already been collected be used for AI training? Careful scrutiny is required here. The upcoming AI Regulation could allow an exception for innovation purposes in test environments under certain conditions (Art. 54 AI Regulation). Contract fulfilment (Art. 6 para. 1 lit. b) GDPR) may apply to AI systems that are an integral part of a service (e.g. generative AI). In the case of supporting AI (e.g. chatbots), the necessity must be examined on a case-by-case basis.

If consent is used as a basis, it must be comprehensible, informed and voluntary (Art. 4 para. 11 GDPR). The transparency obligations (Art. 12 et seq. GDPR) require simple and precise information, which can be difficult with complex AI. It must be technically feasible to withdraw consent (privacy by design!). In the employment relationship, the voluntary nature of consent must be examined particularly critically.

2. Fairness: avoid unexpected uses

The principle of fairness requires fair and transparent data processing. For AI, this means avoiding hidden, unexpected or disproportionate uses. Especially when training with large amounts of data (big data), the proportionality of the scope is crucial. Organisations must be able to understand what data the AI model was trained on and whether the use of their own data impairs the right to be forgotten. Hidden AI use or algorithmic discrimination based on one-sided training data is unfair. A risk assessment for potential discrimination must be carried out before use, particularly with regard to equal opportunities in organisations.

3. Transparency: comprehensible information is mandatory

Data subjects must be informed transparently and comprehensibly about the processing of their data by AI (Art. 13, 14 GDPR). Technically complex AI solutions must be explained simply. Organisations should supplement their data protection declarations with information on the use of AI, purpose, logic of automated decisions and potential risks. The upcoming AI regulation will bring additional transparency obligations (e.g. AI labelling) depending on the risk class.

4. Purpose limitation: do not misuse data for new purposes

The further processing of data already collected in AI systems for new, incompatible purposes is generally not permitted. Exceptions apply only for archiving, research or statistical purposes, provided these are compatible with the original purposes. The use of anonymised or publicly accessible data, on the other hand, is unproblematic. Many developers and product managers underestimate the importance of purpose limitation. Yet it is the centrepiece of data protection-compliant processing. Every AI project therefore needs:

  • A clear definition of the processing purpose
  • Documentation and verifiability of the purpose
  • Mechanisms for deletion or anonymisation after purpose fulfilment

Practical example: In a project to optimise personnel planning, we wanted to use historical employee data. The problem: this data was originally collected for payroll purposes. Further use was not permitted without the consent of the data subjects. Solution: Pseudonymisation of the data and redefinition of the processing purpose with a data protection impact assessment.
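The pseudonymisation step mentioned in the practical example can be sketched in a few lines. The following Python snippet is purely illustrative (the key handling, field names and record layout are assumptions, not a description of the actual project): a keyed hash (HMAC) replaces the direct identifier, so records remain linkable for planning analyses without revealing who they belong to.

```python
import hmac
import hashlib

# Illustrative key; in practice it must be stored separately from the
# dataset, because whoever holds it can re-identify the pseudonyms.
SECRET_KEY = b"rotate-and-store-separately"

def pseudonymise(employee_id: str) -> str:
    """Replace a direct identifier with a keyed hash.

    Unlike a plain hash, an HMAC cannot be reversed by brute-forcing
    a list of known IDs without access to the key."""
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()

record = {"employee_id": "E-1042", "hours_worked": 152, "department": "logistics"}
pseudonymised = {**record, "employee_id": pseudonymise(record["employee_id"])}

# The same input always maps to the same pseudonym, so analyses can
# still link records across the dataset.
assert pseudonymise("E-1042") == pseudonymised["employee_id"]
assert pseudonymised["employee_id"] != "E-1042"
```

Note that pseudonymised data remains personal data under the GDPR; only irreversible anonymisation takes it out of scope.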

5. Data minimisation: only process what is absolutely necessary

Data minimisation (Art. 5 para. 1 lit. c) GDPR) requires that only the data required for the respective purpose is processed. In AI training with large amounts of data, a careful proportionality check between the amount of data and training efficiency is necessary. The irreversible anonymisation of training data can be a data protection-compliant solution.
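In code, data minimisation often comes down to an explicit whitelist of fields that the stated purpose actually requires. The following Python sketch is illustrative only; the field names and the idea of a coarsened postcode are invented examples, not a prescribed method:

```python
# Fields actually needed for the defined processing purpose (illustrative)
REQUIRED_FIELDS = {"age_band", "postcode_region", "usage_count"}

def minimise(record: dict) -> dict:
    """Keep only the fields required for the defined purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {
    "name": "Jane Doe",           # not needed for training -> dropped
    "email": "jane@example.com",  # not needed for training -> dropped
    "age_band": "30-39",          # already coarsened instead of exact age
    "postcode_region": "10xxx",   # region instead of full address
    "usage_count": 17,
}

assert minimise(raw) == {
    "age_band": "30-39",
    "postcode_region": "10xxx",
    "usage_count": 17,
}
```

An explicit whitelist is preferable to a blacklist: new fields added to the source data are excluded by default rather than silently flowing into the training set.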

6. Accuracy: avoid incorrect AI results

Personal data must be factually correct (Art. 5 para. 1 lit. d) GDPR). AI systems, especially large language models, can generate so-called hallucinations - false but plausible-sounding information. Organisations must therefore critically examine and verify AI results in order to comply with the right to rectification (Art. 16 GDPR).

7. Accountability: being able to demonstrate compliance

Organisations must not only ensure compliance with the GDPR, but also be able to prove it (Art. 5 para. 2 GDPR). This requires appropriate technical and organisational measures, including a record of processing activities, data protection guidelines, documentation of data protection violations, data processing agreements, data protection impact assessments and privacy by design/default. This evidence must also cover compliance with the data protection principles when using AI, together with its documentation. For AI, the Global Privacy Assembly additionally requires impact assessments, registers of AI development and use as well as transparency regarding AI use, data and logic.

Transparency, information obligations and automated decisions: Trust through information

Transparency in the context of AI goes far beyond a standard data protection notice on the website. Data subjects have a right to be informed in detail about:

  • What data is collected in connection with the AI system.
  • How this data is processed and what it is specifically used for.
  • Whether automated decision-making takes place and the logic behind it.
  • What rights they have with regard to their data.

The transparency principle of the GDPR (Art. 5 para. 1 lit. a) GDPR) explicitly requires comprehensible information when processing personal data, which poses a particular challenge in the complex field of AI. Transparency is not only a legal obligation (Art. 12-14, Art. 6, 7 GDPR and, in future, the AI Regulation), but also creates the necessary acceptance for the use of AI systems.

The detailed information obligations under Art. 13 and 14 GDPR

Articles 13 and 14 of the GDPR set out detailed information obligations that controllers must fulfil when collecting personal data. This includes information about the controller itself, the specific processing purposes and the respective legal bases. Recipients of the data, the planned storage period, the rights of data subjects (information, rectification, erasure, etc.), the obligation to provide the data and the existence of automated decision-making, including profiling, must also be disclosed. If the data is not collected directly from the data subject (Art. 14 GDPR), information on the origin of the data must also be provided.

Special challenges with AI: Automated decisions and complexity

In the context of AI, the specific implementation of transparency and, in particular, information about automated decisions and profiling is of particular importance. The complexity is often increased by AI-specific information, which the AI Regulation will supplement further in the future. To fulfil these information obligations, various approaches can be considered, such as links to more detailed explanations, QR codes, easy-to-understand symbols or supplementary paper documents. What can reasonably be expected of data subjects when accessing this information must always be taken into account.

Automated decisions (Art. 22 GDPR): Making the logic understandable

Particular attention must be paid to information on automated decisions in individual cases in accordance with Article 22 (1) and (4) GDPR. Here, meaningful information must be provided about the logic involved, the scope and the intended effects of the automated decision. In the case of AI applications that have direct customer contact, such as automated claims settlement, this information can also be provided outside of the actual application. However, full disclosure of the exact logic also harbours risks of misuse. In many cases, a data protection impact assessment (DPIA) in accordance with Article 35(1) and (3)(a) GDPR is required in connection with automated decisions by AI. In addition, it is strongly recommended not to base significant decisions solely on AI, but to implement a "human-in-the-loop" strategy in which human review and intervention remain possible.

According to Art. 22 GDPR, data subjects must not be subject to a decision based solely on automated processing which produces legal effects concerning them or significantly affects them. Data controllers should ensure that

  • That people are involved in the decision-making processes
  • That comprehensible criteria are used for evaluation
  • That data subjects are informed about the logic and consequences of the processing
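One way to implement the "human-in-the-loop" requirement described above is a decision gate in which the model only proposes an outcome and a person always sets the final one. The following Python sketch is a simplified illustration; the scoring threshold, function names and reviewer logic are assumptions, not a reference implementation of Art. 22 compliance:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    score: float       # model output, one input among several
    outcome: str       # final outcome, set by a person
    decided_by: str    # audit trail: who made the decision

def decide_credit(score: float, human_review) -> Decision:
    """Route every decision with legal effect through a human reviewer.

    The model only *proposes* an outcome; the reviewer sets the final
    one, so no decision is 'based solely' on automated processing."""
    proposed = "approve" if score >= 0.7 else "refer"
    outcome = human_review(proposed, score)
    return Decision(score=score, outcome=outcome, decided_by="human")

# Illustrative reviewer that may override the model's proposal
def reviewer(proposed: str, score: float) -> str:
    if proposed == "refer" and score >= 0.5:
        return "approve_with_conditions"
    return proposed

d = decide_credit(0.55, reviewer)
assert d.decided_by == "human"
assert d.outcome == "approve_with_conditions"
```

For the human involvement to count, the reviewer must have real authority and information to deviate from the proposal; a rubber-stamp click would still leave the decision "solely automated".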

Data subject rights remain in place when using AI

Even if artificial intelligence is based on complex algorithms, the rights of data subjects under the GDPR remain fully valid. This means that the rights to information (Art. 15 GDPR), rectification (Art. 16 GDPR), erasure (Art. 17 GDPR), restriction of processing (Art. 18 GDPR), data portability (Art. 20 GDPR) and objection (Art. 21 GDPR) must also be guaranteed in the context of AI applications. The frequent lack of transparency of AI systems ("black boxes") poses a particular challenge. To overcome this and effectively guarantee data subjects' rights, detailed documentation of design decisions, the examination of alternative, more transparent AI approaches ("explainable AI") and close cooperation between data protection and IT officers are essential.

The processing directory as the key to transparency

A central instrument for maintaining an overview and creating transparency is the record of processing activities (RPA) in accordance with Article 30 GDPR. All processing of personal data must be documented. Of course, this also applies to the use of artificial intelligence. Whether for the analysis of customer data, the automation of decision-making processes or the optimisation of marketing campaigns - if AI processes personal data, this must be recorded in the RPA.

A detailed RPA is essential for AI

A carefully managed RPA is crucial, especially in the context of AI applications, in order to:

  • Ensure accountability: organisations must be able to prove that data processing is lawful.
  • Fulfil the rights of data subjects: only if organisations know which data is processed and how can requests from data subjects (e.g. information, deletion) be answered correctly.
  • Assess risks: transparent documentation helps to identify potential risks to the rights and freedoms of natural persons and to take appropriate protective measures.
  • Shape cooperation with AI providers: a clear RPA helps with the definition of responsibilities and the design of data processing agreements.

Create transparency in the RPA for AI applications

To counteract the lack of transparency of AI systems and create a meaningful RPA, you should consider the following aspects:

  • Detailed description of the processing activity: Describe precisely which specific tasks the AI application performs in the processing process.
  • Information on the types and categories of data used: Document in detail which types and categories of personal data are processed by the AI.
  • Purpose of the processing: Explain clearly and comprehensibly the specific purpose of the processing of the data by the AI application. General formulations are not sufficient here.
  • Recipients or categories of recipients: Indicate to whom the data processed by the AI may be disclosed (e.g. other departments, third-party providers).
  • Deadlines for the deletion of data: Define how long the personal data processed by the AI will be stored and when it will be deleted. This can be a particular challenge for AI systems that are continuously learning.
  • Information on the AI provider (if applicable): If you use AI services from external providers, document them as processors and record the corresponding data processing agreements.
  • Control measures to ensure transparency: Document the measures you have implemented to obtain detailed information about data processing by the AI application and to shed light on the "black box". This can include, for example, the regular review of logs, inspection of the provider's documentation or internal audits.
  • Threshold analysis: Carry out a threshold analysis and, if the result indicates a high risk, a data protection impact assessment.
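To make such an entry concrete, the aspects listed above can be captured as a structured record in an internal register. The following Python sketch uses invented example values throughout; the field set simply mirrors the bullet points above and is not a prescribed schema:

```python
# One RPA entry for an AI application, with illustrative values only
rpa_entry = {
    "activity": "AI-assisted CV pre-screening",          # what the AI does
    "data_categories": ["applicant master data", "CV free text"],
    "purpose": "Ranking applications for a specific vacancy",  # specific, not generic
    "legal_basis": "Art. 6(1)(b) GDPR",
    "recipients": ["HR department", "AI provider (processor)"],
    "retention": "6 months after end of application procedure",
    "processor": {"name": "ExampleAI GmbH", "dpa_signed": True},  # Art. 28 contract
    "transparency_controls": ["quarterly log review", "provider audit reports"],
    "dpia_required": True,  # documented result of the threshold analysis
}

# Minimal completeness check before the entry is accepted into the register:
# generic or missing purposes should be rejected, not silently stored.
required = {"activity", "data_categories", "purpose", "recipients", "retention"}
assert required <= rpa_entry.keys()
assert len(rpa_entry["purpose"]) > 20  # crude guard against empty boilerplate
```

Keeping the register machine-readable like this makes it easier to answer data subject requests and to feed the threshold analysis for the DPIA.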

Data protection impact assessment (DPIA) for AI: a must

When processing personal data using AI, a DPIA in accordance with Art. 35 GDPR is often essential, as AI systems can pose high risks for data subjects through discrimination and a lack of control options. A threshold analysis determines whether there is a high risk and therefore a mandatory DPIA; the decision must be documented in writing. The AI Regulation and the GDPR complement each other here: systems classified as high-risk AI under the AI Regulation are also likely to pose a high risk under data protection law. In particular, the systematic assessment of personal aspects by AI (profiling), the processing of sensitive data or the comprehensive monitoring of publicly accessible areas require a DPIA. The German supervisory authorities' list of processing operations requiring a DPIA explicitly includes the use of AI to control interactions or to evaluate personal aspects. If a high risk remains in the absence of risk minimisation measures, the supervisory authority must be consulted (Art. 36 GDPR).

In addition, the DPIA obligation in the AI context may trigger the obligation to appoint a data protection officer in accordance with Section 38 BDSG. As the risk assessment is often opaque, especially where external processors are involved, it is crucial to address the DPIA at an early stage and obtain information from the manufacturer so that the often time-consuming assessment is completed before the project starts. In summary, the use of AI for automated decision-making or for comprehensive evaluation of individuals generally makes a DPIA necessary.


Erasure concepts in the AI era: implementing the right to be forgotten

The processing of personal data by AI systems requires well thought-out erasure concepts. A key challenge is that AI models often use complex and distributed storage systems, including cloud services, which can make it difficult to localise and delete data. An effective erasure concept must therefore take into account the specific storage structures and access mechanisms of these systems.

The right to be forgotten (Art. 17 GDPR) obliges us to delete personal data under certain conditions. AI systems must be able to fulfil such requests efficiently and completely. Automated erasure mechanisms that remove data once its relevance has expired or upon request can play an important role here.
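A retention-based erasure mechanism of the kind described can be sketched as follows. The retention period, record layout and field names are illustrative assumptions; real systems would also have to propagate deletions into backups, caches and any training pipelines:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative retention period

def due_for_erasure(records, now=None):
    """Return records whose retention period has expired or whose
    subject has requested erasure (Art. 17 GDPR)."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r["erasure_requested"] or now - r["collected_at"] > RETENTION
    ]

records = [
    {"id": 1, "collected_at": datetime(2022, 1, 1, tzinfo=timezone.utc),
     "erasure_requested": False},                       # retention expired
    {"id": 2, "collected_at": datetime.now(timezone.utc),
     "erasure_requested": True},                        # request received
    {"id": 3, "collected_at": datetime.now(timezone.utc),
     "erasure_requested": False},                       # still within retention
]

# Records 1 and 2 must be erased; the erasure itself should be logged
# (without re-storing the data) for accountability under Art. 5(2) GDPR.
assert [r["id"] for r in due_for_erasure(records)] == [1, 2]
```

Running such a job on a schedule turns the erasure concept from a paper document into an enforced process.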

It is also essential to document the entire deletion process. This serves as proof of compliance with data protection regulations and enables accountability to supervisory authorities and data subjects.

Can artificial intelligence support compliance with the GDPR?

A solution-orientated approach could be to see data protection not as an obstacle to innovation, but as an integral part of the development and implementation of AI systems. Privacy by design and privacy by default are important keywords here. By adopting a risk-based approach, companies can focus their resources on the AI applications that harbour the greatest data protection risks. Many GDPR-related tasks are repetitive and time-consuming, which is where AI can provide valuable support.

AI support for repetitive data protection tasks

  • Document analysis: AI systems can quickly analyse large volumes of documents (e.g. contracts, guidelines, privacy policies) to identify relevant data protection provisions and check for consistency.
  • Management of data subject rights: The processing of requests from data subjects (information, correction, deletion, etc.) can be automated by AI-supported workflows, from the identification of the request to the provision of the information.
  • Creation and updating of processing records: AI can help to visualise data flows, extract information for the record of processing activities (RPA) and keep it up to date.

AI-supported monitoring of GDPR compliance:

  • Detection of anomalies: AI-based systems can recognise unusual data access or movements that could indicate potential data breaches. Organisations must report data protection incidents to the competent data protection supervisory authority immediately and at the latest within 72 hours of becoming aware of the breach. An AI-supported early warning system can help organisations meet these deadlines.
  • Compliance checks: AI can automatically check policies and processes for compliance with GDPR requirements and generate alerts in the event of deviations.
  • Assessment of risks: By analysing data processing procedures, AI can recognise patterns and identify potential data protection risks at an early stage to enable preventative measures to be taken.
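The anomaly detection and the 72-hour reporting deadline described above can be illustrated with a deliberately simple statistical check (a z-score on daily access counts) rather than a full ML model. All numbers and thresholds below are invented for illustration:

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def flag_unusual_access(daily_counts, today_count, threshold=3.0):
    """Flag today's access volume if it deviates strongly from the
    recent baseline (a simple z-score check, not a trained model)."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return today_count != mu
    return abs(today_count - mu) / sigma > threshold

def reporting_deadline(detected_at: datetime) -> datetime:
    """Art. 33 GDPR: notify the supervisory authority within 72 hours
    of becoming aware of the breach."""
    return detected_at + timedelta(hours=72)

history = [102, 98, 110, 95, 105, 99, 101]   # normal daily access counts
assert flag_unusual_access(history, 480)      # sudden spike -> investigate
assert not flag_unusual_access(history, 104)  # within the normal range

detected = datetime(2024, 5, 1, 9, 0)
assert reporting_deadline(detected) == datetime(2024, 5, 4, 9, 0)
```

A flagged spike is only a trigger for human investigation; whether a reportable breach has occurred, and when the 72-hour clock starts, remains a legal assessment.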

Analysing data flows using artificial intelligence:

  • Visualisation of data streams: AI tools can visualise complex data flows automatically and thus increase traceability and transparency.
  • Identification of data leaks: By analysing network activities and data movements, AI can help to identify potential data leaks at an early stage and initiate countermeasures.

Monitoring the use of personal data for specific purposes:

  • Analysis of usage patterns: AI systems can analyse whether personal data is being used in accordance with the specified purpose and raise the alarm in the event of misuse.
  • Monitoring of access authorisations: AI can help to monitor access authorisations to personal data and ensure that only authorised persons have access.

Detection of data protection violations by AI:

  • Automated detection of incidents: AI systems can recognise patterns that indicate a data breach (e.g. sudden increase in data access, unusual data transfers).
  • Support in analysing the causes: AI can help to quickly analyse the causes and scope of a data breach in order to initiate appropriate measures to contain and rectify it.
  • Automated reporting processes: AI can assist in the preparation and transmission of reports to the supervisory authorities.

Artificial intelligence and data protection: requirements, recommendations and positions of the supervisory authorities

The German data protection supervisory authorities and the Federal Commissioner for Data Protection and Freedom of Information (BfDI) have also dealt intensively with the challenges and requirements of AI and data protection. They emphasise that the use of AI systems requires compliance with the General Data Protection Regulation (GDPR) and offer various guidance and statements in this regard.

The BfDI, for example, has published a statement on "Generative Artificial Intelligence" in which it emphasises the need for AI models to be developed and used in compliance with data protection regulations. In particular, it addresses issues of anonymisation, the legal basis for data processing and the rights of data subjects.

The Data Protection Conference (DSK), the body of independent German federal and state data protection supervisory authorities, has published guidance on AI and data protection. This provides controllers with practical advice on how to design AI applications in compliance with data protection regulations and emphasises the importance of transparency, data minimisation and data protection impact assessments.

The German state data protection commissioners (LfDIs) have published various statements, position papers and guidance documents on the topic of "Artificial intelligence (AI) and data protection". These documents provide valuable information for the data protection-compliant use of AI systems. Below you will find a selection of these publications.

Court judgements already handed down on artificial intelligence and data protection

Significant judgements, in particular on SCHUFA and scoring under Article 22 GDPR, emphasise strict compliance with the GDPR in AI-supported decision-making, especially in profiling. Automated decisions with legal or significant consequences require particular caution and usually human scrutiny in order to safeguard data protection rights. These judgements highlight current challenges and developments in the area of conflict between AI and data protection law in Germany.

ECJ judgement on SCHUFA scoring

On 7 December 2023, the European Court of Justice (ECJ) ruled that the scoring practised by SCHUFA is to be categorised as a fundamentally prohibited automated decision in individual cases in accordance with Article 22 GDPR if the score plays a significant role in decisions on granting credit. This means that decisions that are based solely on automated processes such as the SCHUFA score and have a significant impact on the data subject are not permitted without additional human scrutiny.

In the same judgement, the ECJ found that SCHUFA's practice of storing information on the granting of a discharge from residual debt for longer than the public insolvency register is not compatible with the GDPR. While the public register retains this information for six months, SCHUFA previously stored it for three years. The ECJ ruled that longer storage by private credit agencies violates the rights of data subjects.

Liability of AI operators for violations of personality rights

The Kiel Regional Court ruled on 29 February 2024 (case no. 6 O 151/23) that operators of AI systems are responsible for violations of personality rights caused by their AI. In this case, an AI system had generated and published untrue information about a company. The court clarified that operators are also responsible for AI-generated content.

Co-determination rights of the works council in the use of AI

The Hamburg Labour Court ruled on 16 January 2024 (case no.: 24 BVGa 1/24) that the works council has no right of co-determination if employees voluntarily use AI tools such as ChatGPT via private accounts. As the employer had no access to the data collected by the AI operator, the court saw no monitoring pressure and therefore no violation of co-determination rights.

Online course Understanding AI - basics, laws and data protection practice

Do you want to understand the basics of artificial intelligence, keep an eye on current laws and know how to ensure data protection in AI practice? Our online course provides you with the knowledge and practical tools you need to operate confidently in the world of AI and data protection. Discover the course content now and start your training!

Conclusion: Accepting the challenges of AI and data protection and finding solutions

The use of artificial intelligence (AI) is no longer a vision of the future, but is increasingly shaping the everyday life of organisations. The AI Regulation creates an important legal framework for this, but compliance with existing data protection laws such as the GDPR remains essential. It is crucial to understand this: AI is a tool whose data protection implications arise from the specific use case and the purpose pursued.

The integration of AI and data protection therefore requires a holistic approach, from a clear definition of purpose and data protection impact assessment to compliance with the data protection principles of the GDPR, such as data minimisation, transparency and accountability. The AI Regulation itself supplements these requirements with specific obligations, for example when dealing with training data.

Instead of seeing data protection as a barrier to innovation, we should recognise the opportunities that AI offers in overcoming data protection challenges. From automating repetitive compliance tasks to detecting data breaches, AI can be a valuable ally.

The latest court judgements, in particular on SCHUFA scoring, underline the need for careful consideration when using AI in relation to automated decisions and profiling. They call for the rights of data subjects to be protected and for human control to be ensured in critical decision-making processes.

The future lies in the intelligent combination of AI and data protection. By addressing the legal framework at an early stage, consistently applying data protection principles and utilising the potential of AI to support compliance, we can create innovative solutions that focus on both progress and the protection of personal data. We need to accept the challenges and work together to find ways in which AI and data protection can go hand in hand in practice.

Caroline Schwabe
