The EU AI Act entered into force on August 1, 2024. Most of its provisions apply from August 2, 2026. That gives Polish companies roughly five months from the time I am writing this. Five months to classify every AI system they operate, assess risk levels, implement documentation requirements, and establish human oversight mechanisms. Most have not started.
I know this because I work with Polish organizations deploying AI. When I ask about AI Act readiness, I get one of three responses: confusion about whether the regulation applies to them, vague plans to "look into it next quarter," or the assumption that their legal team will handle it. None of these are strategies. The AI Act is not a legal checkbox exercise. It requires technical changes, organizational processes, and documented evidence that both exist and function.
Here is what you actually need to know and do.
What the AI Act Actually Regulates
The AI Act regulates AI systems, not AI companies. This distinction matters. If you are a bank using an AI-powered credit scoring model built by a third-party vendor, you are still subject to the regulation. You are a "deployer" under the Act, and deployers have obligations. The vendor is a "provider," and they have different obligations. Both are responsible.
The regulation defines an "AI system" broadly: any machine-based system that operates with some level of autonomy and generates outputs like predictions, recommendations, decisions, or content that can influence physical or virtual environments. That definition covers far more than most people realize. Your chatbot, your document classifier, your anomaly detection pipeline, your automated email categorization system -- all of these are AI systems under the Act.
A common misconception among Polish companies: "We do not develop AI, we just use tools." The Act does not care about that distinction. If you deploy an AI system that processes data about EU residents, the Act applies to you. Period.
The Risk Categories: Where Does Your System Land?
The AI Act uses a risk-based approach with four tiers. Getting the classification right is the first and most consequential step. Everything else -- documentation, monitoring, transparency -- follows from which category your systems fall into.
Unacceptable Risk (Banned)
These AI applications are prohibited entirely from February 2, 2025. They include social scoring systems used by public authorities, real-time biometric identification in public spaces (with narrow exceptions for law enforcement), emotion recognition in workplaces and educational institutions, and AI systems that manipulate behavior through subliminal techniques.
If you are reading this and thinking "that does not apply to us," you are probably right. But verify. I encountered a Polish HR tech company that used sentiment analysis on employee video calls as part of their engagement scoring. That falls uncomfortably close to the workplace emotion recognition ban. They had to remove the feature entirely.
High Risk (Heavy Obligations)
This is where most of the compliance work lives. High-risk AI systems include those used in: critical infrastructure management, education and vocational training (grading, admissions), employment (recruitment, task allocation, performance evaluation, promotion decisions), access to essential services (credit scoring, insurance pricing, emergency dispatch), law enforcement, migration and border control, and administration of justice.
If your AI system makes or materially influences decisions about people's access to jobs, credit, education, insurance, or public services, it is almost certainly high-risk. For Polish companies in financial services and insurance -- which is where I do most of my work -- this means credit scoring models, claims assessment automation, fraud detection systems, and automated underwriting all land in the high-risk category.
High-risk systems must comply with requirements that include:
- a documented risk management system maintained throughout the system's lifecycle,
- data governance standards covering training, validation, and testing datasets,
- technical documentation sufficient for a third party to assess compliance,
- automatic logging of system operation,
- transparency to deployers through clear instructions for use,
- human oversight measures, and
- adequate levels of accuracy, resilience, and cybersecurity.
Limited Risk (Transparency Only)
Systems that interact directly with people but do not fall into the high-risk category. The primary obligation here is transparency: users must be informed they are interacting with an AI system. If your chatbot handles customer support inquiries, you must disclose that it is AI-powered. If you generate synthetic content (deepfakes, AI-generated images or text presented as human-created), it must be labeled as AI-generated.
This applies to most customer-facing chatbots, content generation tools, and AI assistants used in Polish businesses.
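In practice, the disclosure can be as simple as a message sent at the start of every chat session. Here is a minimal sketch of that pattern; the function and variable names are hypothetical, not taken from any real framework:

```python
# Hypothetical illustration of the limited-risk transparency duty:
# every new chat session opens with an explicit AI disclosure.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "You can ask for a human agent at any time."
)

def start_session(session_store: dict, session_id: str) -> str:
    """Open a chat session and return the mandatory AI disclosure."""
    session_store[session_id] = {"disclosed": True, "messages": []}
    return AI_DISCLOSURE

sessions: dict = {}
print(start_session(sessions, "s-001"))
```

The point is not the code but the audit trail: record that the disclosure was actually shown, so you can prove it later.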
Minimal Risk (No Specific Obligations)
AI systems that pose minimal or no risk -- spam filters, AI-powered search within internal tools, optimization algorithms for logistics. No specific obligations under the Act, though voluntary codes of conduct are encouraged.
The Timeline: What Is Already in Force
The rollout is staggered, and some deadlines have already passed:
February 2, 2025 (already in force): Prohibitions on unacceptable risk AI systems. AI literacy obligations for organizations deploying AI systems.
August 2, 2025 (already in force): Obligations for general-purpose AI (GPAI) models. These apply to foundation model providers like OpenAI, Google, and Anthropic, but also to companies fine-tuning these models for specific applications. Governance structures and codes of practice for GPAI models must be in place.
August 2, 2026 (the main deadline): Full application of all remaining provisions, including all requirements for high-risk AI systems. This is the date most Polish companies need to work toward.
August 2, 2027: Extended deadline for high-risk AI systems that are components of regulated products (medical devices, vehicles, aviation equipment). These get an additional year because they are already subject to sector-specific conformity assessments.
The AI literacy requirement from February 2025 is worth emphasizing because most organizations I talk to missed it. Article 4 requires that providers and deployers of AI systems ensure their staff have "sufficient AI literacy." This is not optional. It is already in force. If your organization uses AI systems and has not provided structured training to the people operating or overseeing those systems, you are already non-compliant.
I cover practical approaches to AI literacy programs in our team training workshops. The training does not need to be elaborate, but it does need to be documented.
Penalties: Why This Is Not Theoretical
The AI Act has teeth. Fines are structured in three tiers based on the severity of the violation:
- Up to 35 million EUR or 7% of global annual turnover for prohibited AI practices
- Up to 15 million EUR or 3% of turnover for violations of high-risk requirements
- Up to 7.5 million EUR or 1.5% of turnover for supplying incorrect information to authorities
For SMEs and startups, the fines are capped at the lower of the percentage or the fixed amount. But even the reduced amounts are substantial enough to threaten a mid-size Polish company's viability.
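To make the cap concrete, here is a small worked calculation based on my reading of the tier structure above (for SMEs the lower of the two amounts applies; for larger companies the higher):

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float,
                 pct_cap: float, is_sme: bool) -> float:
    """Upper bound of an AI Act fine for one violation tier.

    SMEs: capped at the LOWER of the fixed amount and the
    turnover percentage. Larger companies: the HIGHER applies.
    """
    pct_amount = turnover_eur * pct_cap
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# An SME with 50M EUR turnover facing a prohibited-practice violation:
# 7% of 50M = 3.5M EUR, which is lower than the 35M fixed cap.
print(max_fine_eur(50_000_000, 35_000_000, 0.07, is_sme=True))  # 3500000.0
```

Even that "reduced" 3.5 million EUR exposure is existential for most mid-size Polish companies.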
Beyond fines, non-compliance creates business risk. Companies that cannot demonstrate compliance will face procurement barriers -- large enterprises and public sector entities are already adding AI Act compliance to vendor questionnaires. If you sell AI-powered products or services to European companies, compliance becomes a competitive requirement, not just a regulatory one.
The Polish Context: UOKiK and National Implementation
Poland is designating UOKiK (the Office of Competition and Consumer Protection) as the national competent authority for AI Act enforcement. The Chancellery of the Prime Minister coordinates national AI policy through the AI strategy ("Policy for the Development of Artificial Intelligence in Poland from 2020").
The practical implications for Polish companies: UOKiK already has enforcement experience with consumer protection and competition law. They will apply similar investigative approaches to AI Act compliance. Expect audits, complaints-driven investigations, and spot checks, particularly in sectors where AI directly affects consumers -- financial services, telecommunications, e-commerce.
Polish companies also need to monitor the implementing acts and standards that the European Commission is still developing. The AI Office in Brussels is working on harmonized standards through CEN and CENELEC. These standards will provide specific technical benchmarks for requirements like accuracy, resilience, and documentation. Until they are finalized, companies should follow the guidelines published by the AI Office and work toward the intent of the regulation, not wait for pixel-perfect specifications.
Waiting for perfect guidance before starting compliance work is a common mistake. The regulation is final. The requirements are clear. Start now; refine as standards emerge.
Practical Compliance Checklist for Polish Companies
I have distilled this into the concrete steps I walk through with the organizations I advise. This is not legal advice -- consult a lawyer for your specific situation. But this is the technical and organizational work that needs to happen.
Step 1: AI System Inventory (Do This First)
Create a comprehensive register of every AI system your organization uses, develops, or deploys. For each system, document: what it does, what data it processes, who it affects, who operates it, who provided it, and when it was deployed.
Most organizations undercount their AI systems by 40-60% on the first pass. People forget about the ML model embedded in their CRM, the AI-powered feature in their analytics platform, the automated recommendation engine their marketing team activated two years ago. Be thorough.
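The register does not need special tooling; a spreadsheet works. If you want it in code, a minimal sketch might look like this, with field names mirroring the list above (my own suggestion, not an official schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    purpose: str                # what it does
    data_processed: list[str]   # categories of data it touches
    affected_groups: list[str]  # who it affects (customers, employees, ...)
    operator: str               # team or role that runs it day to day
    provider: str               # vendor, or "internal" if built in-house
    deployed_on: date
    risk_tier: str = "unclassified"  # filled in during Step 2

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening model",
        purpose="Ranks incoming job applications",
        data_processed=["CVs", "application forms"],
        affected_groups=["job applicants"],
        operator="HR operations",
        provider="third-party SaaS",
        deployed_on=date(2023, 5, 1),
    ),
]
```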
Step 2: Risk Classification
For each system in your inventory, determine the risk category. Use the classification criteria in Article 6 and Annexes I and III of the Act. When in doubt, classify higher -- it is easier to downgrade later than to explain to a regulator why you classified a high-risk system as limited risk.
Pay particular attention to systems that process personal data of Polish citizens, systems that influence decisions about employment or financial services, and systems that interact directly with consumers.
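As a first-pass screen (not a substitute for reading Article 6 and Annex III), you can encode the rule of thumb directly. The domain list below loosely paraphrases Annex III areas and is illustrative only:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Loose paraphrase of Annex III areas -- verify against the Act itself.
HIGH_RISK_DOMAINS = {
    "employment", "credit-scoring", "insurance-pricing", "education",
    "critical-infrastructure", "law-enforcement", "migration", "justice",
}

def first_pass_tier(domain: str, interacts_with_people: bool) -> RiskTier:
    """Rule of thumb: when in doubt, classify higher (see Step 2)."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LIMITED if interacts_with_people else RiskTier.MINIMAL

print(first_pass_tier("credit-scoring", True))  # RiskTier.HIGH
```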
Step 3: Gap Analysis for High-Risk Systems
For each high-risk system, assess your current state against the requirements in Articles 8-15. The key questions:
Risk management (Art. 9): Do you have a documented risk management process for this system? Is it maintained and updated? Does it cover risks to health, safety, and fundamental rights?
Data governance (Art. 10): Can you demonstrate that training data was relevant, representative, and free from errors? Do you have documentation of data provenance and preparation?
Technical documentation (Art. 11): Could a competent authority assess your system's compliance from your documentation alone? This is the bar.
Logging (Art. 12): Does your system automatically log its operations at a level sufficient for post-deployment monitoring? Can you trace a specific decision back to the inputs and model state that produced it? (A minimal logging sketch follows this list.)
Transparency (Art. 13): Do deployers have clear instructions for use? Do they understand the system's capabilities, limitations, and intended purpose?
Human oversight (Art. 14): Can a human effectively monitor the system's operation? Can they intervene, override, or stop the system when necessary? Is this person adequately trained?
Accuracy and resilience (Art. 15): Have you tested the system against adversarial inputs, data drift, and edge cases? Are accuracy metrics documented and monitored?
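For the Article 12 logging question, here is a minimal sketch of what decision-level traceability can look like. It is a pattern illustration under my assumptions, not a compliance-certified implementation; all names are hypothetical:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
decision_log = logging.getLogger("ai_decisions")

def log_decision(model_name: str, model_version: str,
                 inputs: dict, output: object) -> None:
    """Record enough context to trace a decision back to its
    inputs and model state, in the spirit of the Art. 12 duty."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        # Hash the raw inputs so the log stays traceable without
        # duplicating personal data into yet another store.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": str(output),
    }
    decision_log.info(json.dumps(record))

log_decision("credit-scoring", "2026.02.1",
             {"applicant_id": "A-123", "features": [0.4, 0.9]},
             "declined")
```

The model version field matters more than it looks: without it, you cannot answer "which model state produced this decision" after a retrain.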
Step 4: Implement Missing Controls
Based on your gap analysis, implement the controls you are missing. This is the engineering work. For most organizations, the biggest gaps are in documentation (they built it but did not write it down), logging (the system works but does not record why it made specific decisions), and human oversight (there is a theoretical human in the loop, but in practice nobody reviews the system's outputs).
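On the human oversight gap specifically, one common pattern is to route low-confidence decisions to a human queue instead of auto-applying them. A toy sketch, with an illustrative threshold I chose for the example:

```python
# Hypothetical Art. 14 pattern: escalate uncertain decisions to a human.
REVIEW_THRESHOLD = 0.85  # illustrative, not a regulatory value
human_review_queue: list[dict] = []

def apply_or_escalate(decision: str, confidence: float, case_id: str) -> str:
    """Auto-apply only confident decisions; escalate the rest."""
    if confidence < REVIEW_THRESHOLD:
        human_review_queue.append({"case": case_id, "proposed": decision})
        return "escalated to human reviewer"
    return f"auto-applied: {decision}"

print(apply_or_escalate("approve", 0.62, "C-001"))  # escalated
print(apply_or_escalate("approve", 0.97, "C-002"))  # auto-applied
```

The queue alone is not oversight: someone trained and empowered to override the system has to actually work it, and you need records showing they do.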
Step 5: Establish Ongoing Monitoring
Compliance is not a one-time project. High-risk systems require continuous monitoring for accuracy degradation, data drift, emerging risks, and incidents. Build dashboards, set alerts, schedule regular reviews. Document everything.
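A monitoring alert can start very simple. This sketch tracks rolling accuracy over a window of outcomes and flags degradation; the window size and threshold are assumptions for illustration:

```python
from collections import deque

WINDOW = 500        # illustrative rolling window
ALERT_BELOW = 0.90  # illustrative accuracy floor
outcomes: deque[bool] = deque(maxlen=WINDOW)

def record_outcome(prediction_correct: bool) -> None:
    """Track rolling accuracy and flag degradation."""
    outcomes.append(prediction_correct)
    if len(outcomes) == WINDOW:
        accuracy = sum(outcomes) / WINDOW
        if accuracy < ALERT_BELOW:
            # In production: page the owning team and log the incident.
            print(f"ALERT: rolling accuracy {accuracy:.2%} below target")
```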
Step 6: Prepare for Conformity Assessment
High-risk systems must undergo a conformity assessment before being placed on the market or put into service. For most AI systems, this is a self-assessment based on internal procedures. For certain biometric and critical infrastructure systems, it requires a third-party notified body assessment.
Prepare your conformity assessment documentation now. Even though the full requirements apply from August 2026, the assessment process takes time, and rushing it leads to gaps.
Build monitoring, documentation, and review cycles into your AI operations now. Retrofitting them later costs three times as much.
What I See Polish Companies Getting Wrong
Three patterns keep recurring across the organizations I work with.
Treating it as a legal project. AI Act compliance requires legal input, but the work is primarily technical and organizational. Your legal team cannot write your risk management system, implement logging, or establish human oversight mechanisms. This needs engineering leadership alongside legal guidance. I advise organizations on building this cross-functional governance capability because it does not exist naturally in most Polish companies.
Waiting for implementing acts. Yes, some technical standards are still being developed. No, that does not mean you should wait. The high-level requirements are final. Risk management, documentation, logging, human oversight -- these are not going to change. The standards will specify benchmarks and methods, but the obligations are fixed. Start with the obligations, refine with the standards.
Ignoring vendor responsibility. If you deploy an AI system built by a third party, you are still responsible for compliance as a deployer. "Our vendor said it is compliant" is not a compliance strategy. Verify. Request documentation. Include AI Act compliance requirements in your procurement contracts. If your vendor cannot provide technical documentation, logging capabilities, and accuracy metrics, that is a red flag.
The Five-Month Sprint
If you are starting from zero with five months until the August 2026 deadline, here is a realistic prioritization.
Month 1: Complete AI system inventory and risk classification. This is the foundation -- everything else depends on knowing what you have and where it falls.
Month 2: Gap analysis for all high-risk systems. Identify the delta between current state and required state. Prioritize gaps by severity.
Months 3-4: Implement missing controls. Focus on documentation, logging, and human oversight first -- these are the most common gaps and take the most time.
Month 5: Conformity assessment preparation, final review, and staff training. Ensure everyone who operates or oversees AI systems understands their responsibilities under the Act.
This timeline is tight. It assumes dedicated resources and management attention. If your organization deploys more than five high-risk AI systems, you likely need to start in parallel rather than sequentially. The organizations that struggle are not the ones with the most complex AI -- they are the ones that started too late.
Five months is not comfortable. But it is enough if you start now. The organizations that wait for the "right time" or the "final guidance" will discover that August 2026 arrives whether they are ready or not.
Damian Krawcewicz
AI strategy consultant and practitioner. 20 years in engineering, currently leading AI adoption for 100+ engineers.