{"id":714822,"date":"2026-04-16T01:13:51","date_gmt":"2026-04-15T23:13:51","guid":{"rendered":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/?p=714822"},"modified":"2026-04-16T09:32:56","modified_gmt":"2026-04-16T07:32:56","slug":"openai-chatgpt-5-4-cyber-en-anthropic-claude-gaan-de-strijd-aan-met-nieuwe-cybermodellen","status":"publish","type":"post","link":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/openai-chatgpt-5-4-cyber-en-anthropic-claude-gaan-de-strijd-aan-met-nieuwe-cybermodellen\/","title":{"rendered":"OpenAI (ChatGPT 5.4 Cyber) and Anthropic Claude take each other on with new cyber models"},"content":{"rendered":"<div id=\"attachment_714823\" style=\"width: 1118px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-714823\" class=\"wp-image-714823 size-full\" src=\"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62.png\" alt=\"OpenAI (ChatGPT 5.4 Cyber) and Anthropic Claude take each other on with new cyber models\" width=\"1108\" height=\"324\" title=\"\" srcset=\"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62.png 1108w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-700x205.png 700w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-1024x299.png 1024w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-250x73.png 250w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-768x225.png 768w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-720x211.png 720w, 
https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-520x152.png 520w, https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-content\/uploadsnieuwssocial\/2026\/04\/img_69dfc6a582a62-320x94.png 320w\" sizes=\"auto, (max-width: 1108px) 100vw, 1108px\" \/><p id=\"caption-attachment-714823\" class=\"wp-caption-text\">OpenAI (ChatGPT 5.4 Cyber) and Anthropic Claude take each other on with new cyber models.<\/p><\/div>\n<p><strong>Competition between AI companies keeps intensifying, and the latest developments in artificial intelligence prove it once again. OpenAI has announced its new model <a href=\"https:\/\/www.newsbytesapp.com\/news\/science\/openai-unveils-gpt-5-4-cyber-to-rival-anthropic-claude-mythos\/story\" target=\"_blank\" rel=\"noopener\">GPT-5.4 Cyber<\/a>, only a short time after Anthropic presented its own advanced cyber model, Claude Mythos. With that, both companies are going all in on a new market: AI systems designed specifically for cybersecurity. According to OpenAI, GPT-5.4 Cyber was built to support security researchers and experts in finding software vulnerabilities, analyzing security breaches, and strengthening digital defenses. The model is said to go beyond earlier AI versions by understanding technical context better and analyzing more complex security problems.<\/strong><\/p>\n<p>OpenAI emphasizes that the model is intended for legitimate, defensive use within cybersecurity. In the company\u2019s own words:<\/p>\n<blockquote>\n<p><em>\u201cGPT-5.4-Cyber lowers the refusal boundary for legitimate cybersecurity work.\u201d<\/em> \u2013 OpenAI<\/p>\n<\/blockquote>\n<p>With that statement, OpenAI signals that the model will be less quick to refuse security-related requests when they are intended for ethical research and protection, something earlier AI models were often reluctant to allow. The announcement of GPT-5.4 Cyber looks like a direct response to Anthropic, which recently drew a great deal of attention with Claude Mythos. That model was presented as an exceptionally powerful cyber AI system and, according to reports, identified thousands of potential zero-day vulnerabilities during testing. Anthropic is thereby explicitly positioning Claude as an elite tool for advanced security operations.<\/p>\n<blockquote>\n<p>Although both models target the same domain, the two companies are taking slightly different approaches:<\/p>\n<\/blockquote>\n<h3>Comparison: Claude vs ChatGPT\/OpenAI<\/h3>\n<ul>\n<li><strong>Claude Mythos (Anthropic)<\/strong>\n<ul>\n<li>Aimed at elite cybersecurity applications<\/li>\n<li>Reportedly found thousands of zero-day vulnerabilities<\/li>\n<li>Very limited\/private access via Project Glasswing<\/li>\n<\/ul>\n<\/li>\n<li><strong>GPT-5.4 Cyber (OpenAI\/ChatGPT)<\/strong>\n<ul>\n<li>Also built for defensive cybersecurity<\/li>\n<li>Available to selected experts via Trusted Access for Cyber<\/li>\n<li>More focus on broader, controlled access for security researchers<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Access \/ Availability<\/h2>\n<ul>\n<li>OpenAI GPT-5.4-Cyber\n<ul>\n<li>Available to a larger group of verified security professionals via the Trusted Access for Cyber program<\/li>\n<li>More scalable rollout strategy<\/li>\n<li>Aimed at broader adoption within cybersecurity teams<\/li>\n<\/ul>\n<\/li>\n<li>Anthropic Claude Mythos\n<ul>\n<li>Very limited availability<\/li>\n<li>Only for select partners \/ elite organizations via Project Glasswing<\/li>\n<li>Deliberately kept more exclusive because of safety concerns<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Strategic Philosophy<\/h3>\n<ul>\n<li>OpenAI\n<ul>\n<li>More \u201ccontrolled openness\u201d: making powerful tools available subject to verification<\/li>\n<li>Believes in iterative deployment and real-world feedback<\/li>\n<\/ul>\n<\/li>\n<li>Anthropic\n<ul>\n<li>More \u201csafety first \/ restrict first\u201d: restrict first, then test<\/li>\n<li>Invests heavily in preventing misuse before any broad release<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Cybersecurity Focus<\/h3>\n<ul>\n<li>GPT-5.4-Cyber\n<ul>\n<li>Specially tuned for vulnerability research<\/li>\n<li>Malware analysis<\/li>\n<li>Binary reverse engineering<\/li>\n<li>Threat intelligence<\/li>\n<\/ul>\n<\/li>\n<li>Claude Mythos\n<ul>\n<li>Designed primarily to discover critical vulnerabilities autonomously<\/li>\n<li>Large-scale software auditing<\/li>\n<li>Exploit identification at the infrastructure level<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Technical Positioning<\/h3>\n<ul>\n<li>GPT-5.4-Cyber\n<ul>\n<li>Appears to be a fine-tuned variant of GPT-5.4<\/li>\n<li>Specialized, but based on an existing foundation model<\/li>\n<\/ul>\n<\/li>\n<li>Claude Mythos\n<ul>\n<li>Positioned as a new frontier model \/ next-gen system<\/li>\n<li>Treated by Anthropic as an exceptionally powerful model<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Risk Perception<\/h3>\n<ul>\n<li>GPT-5.4-Cyber\n<ul>\n<li>OpenAI considers the current risk manageable with access controls<\/li>\n<\/ul>\n<\/li>\n<li>Claude Mythos\n<ul>\n<li>Anthropic calls the model potentially so powerful that a public release would be too risky<\/li>\n<li>Some media outlets describe it as \u201ctoo dangerous for open release\u201d<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Target Audience<\/h3>\n<ul>\n<li>GPT-5.4-Cyber\n<ul>\n<li>SOC teams<\/li>\n<li>Pentesters<\/li>\n<li>MSSPs<\/li>\n<li>Security consultants<\/li>\n<li>Enterprise defenders<\/li>\n<\/ul>\n<\/li>\n<li>Claude Mythos\n<ul>\n<li>Large enterprise security labs<\/li>\n<li>Tech giants<\/li>\n<li>Banks \/ critical infrastructure<\/li>\n<li>National cybersecurity partners<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Market Positioning<\/h3>\n<ul>\n<li>OpenAI\n<ul>\n<li>Wants to win market share quickly in the commercial cyber sector<\/li>\n<\/ul>\n<\/li>\n<li>Anthropic\n<ul>\n<li>Positions itself as a premium\/high-end\/security-first player<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h1>A deep-dive video:<\/h1>\n<p>&nbsp;<\/p>\n<div class=\"video-container\"><iframe loading=\"lazy\" title=\"GPT-5.4 Deep Dive: Computer Use, 1M Context &amp; AI Agents Explained\" width=\"500\" height=\"281\" 
src=\"https:\/\/www.youtube.com\/embed\/B0jGWUwYA9I?feature=oembed&#038;wmode=opaque\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h1>OpenAI\u2019s press release:<\/h1>\n<p>&nbsp;<\/p>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>We are scaling up our Trusted Access for Cyber (TAC) program to thousands of verified individual defenders and hundreds of teams responsible for defending critical software. For years, we\u2019ve been building a cyber defense program on the principles of democratized access, iterative deployment, and ecosystem resilience. In preparation for increasingly capable models from OpenAI over the next few months, we are fine-tuning our models specifically to enable defensive cybersecurity use cases, starting today with a variant of GPT\u20115.4 trained to be cyber-permissive: GPT\u20115.4\u2011Cyber. In this post, we share how we expect our approach of scaling cyber defense in lockstep with increasing model capabilities to guide the testing and deployment of future releases.<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>The progressive use of AI accelerates defenders \u2013 those responsible for keeping systems, data, and users safe \u2013 enabling them to find and fix problems faster in the digital infrastructure everyone relies on. 
Similarly, AI is being\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/global-affairs\/disrupting-malicious-uses-of-ai-october-2025\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">used<\/u>\u2060<\/a>\u00a0by attackers looking to cause harm. We&#8217;ve been preparing for this. Since 2023, we&#8217;ve supported defenders through our\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/openai-cybersecurity-grant-program\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Cybersecurity Grant Program<\/u>\u2060<\/a>\u00a0and strengthened safeguards through our\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/updating-our-preparedness-framework\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Preparedness Framework<\/u>\u2060<\/a>. 
The same year, we started evaluating our models&#8217; cyber capabilities, and in 2025, we began including\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/deploymentsafety.openai.com\/gpt-5-3-codex\/cybersecurity\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">cyber-specific safeguards<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0in our\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/introducing-gpt-5-2\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">model deployments<\/u>\u2060<\/a>. Earlier this year, we furthered our support for defenders with the launch of\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/codex-security-now-in-research-preview\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Codex Security<\/u>\u2060<\/a>\u00a0to identify and fix vulnerabilities at scale. Our approach to this continuous advancement of capabilities is guided by three principles:<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<ul class=\"mb-md marker:text-inherit last:mb-0 in-[:where(ul,ol)]:mt-2 list-disc in-[:where(ul,ol)]:list-[circle] ps-2xs mx-3xs\">\n<li class=\"mb-4xs\"><em><b>Democratized access:\u00a0<\/b>Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn\u2019t. 
That means using clear, objective criteria and methods \u2013 such as strong KYC and identity verification \u2013 to guide\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/trusted-access-for-cyber\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">who can access<\/u>\u2060<\/a>\u00a0more advanced capabilities and automating these processes over time. Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.<\/em><\/li>\n<li class=\"mb-4xs\"><em><b>Iterative deployment:<\/b>\u00a0We learn the most by\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/safety\/how-we-think-about-safety-alignment\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">putting these systems into the world carefully<\/u>\u2060<\/a>\u00a0and improving them over time. As we better understand both their capabilities and risks, we update our models and safety systems accordingly. 
This includes understanding the differentiated benefits and risks of specific models, improving resilience to jailbreaks and other adversarial attacks, and improving defensive capabilities \u2014 while mitigating harms.\u00a0<\/em><\/li>\n<li class=\"mb-4xs\"><em><b>Investing in ecosystem resilience:<\/b>\u00a0We support and accelerate the community of defenders through trusted access pathways, targeted\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/openai-cybersecurity-grant-program\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">grants<\/u>\u2060<\/a>, contributions to\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/www.linuxfoundation.org\/press\/linux-foundation-announces-12.5-million-in-grant-funding-from-leading-organizations-to-advance-open-source-security\" target=\"_blank\" rel=\"noopener noreferrer\"><u class=\"decoration-1 underline-offset-4\">open-source security initiatives<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>, and technologies like\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/codex-security-now-in-research-preview\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Codex Security<\/u>\u2060<\/a>\u00a0that help defenders more rapidly find and patch vulnerabilities.\u00a0<\/em><\/li>\n<\/ul>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em><b>Our strategy for cybersecurity resilience and defensive acceleration<\/b><\/em><\/p>\n<\/div>\n<div 
class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>For years, our cybersecurity strategy has been to invest in research, prevent misuse, and accelerate defenders. As model capabilities have advanced, we have expanded our programs toward these goals, which are grounded in the following convictions:\u00a0<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<ul class=\"mb-md marker:text-inherit last:mb-0 in-[:where(ul,ol)]:mt-2 list-disc in-[:where(ul,ol)]:list-[circle] ps-2xs mx-3xs\">\n<li class=\"mb-4xs\"><em><b>Cyber risk is already here and accelerating, but we can act.\u00a0<\/b>Digital infrastructure has already\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/www.cisa.gov\/news-events\/alerts\/2017\/05\/12\/indicators-associated-wannacry-ransomware\" target=\"_blank\" rel=\"noopener noreferrer\"><u class=\"decoration-1 underline-offset-4\">been vulnerable<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0for years, before advanced AI even came along. Now, existing models can help find vulnerabilities, reason across codebases, and support meaningful parts of the cyber workflow, and threat actors are experimenting with novel AI-driven approaches. We\u2019ve seen sophisticated harnesses elicit stronger and stronger capabilities by using more test-time compute with existing models. That means safeguards cannot wait for a single future threshold.<\/em><\/li>\n<li class=\"mb-4xs\"><em><b>Expand access based on who is using these systems and how they\u2019re being used.\u00a0<\/b>Cyber capabilities are inherently dual-use, so risk isn\u2019t defined by the model alone. 
It also depends on the user, the\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/developers.openai.com\/codex\/concepts\/cyber-safety\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">trust signals<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0around them, and the level of access they\u2019re given.<\/em>\n<ul class=\"mb-md marker:text-inherit last:mb-0 in-[:where(ul,ol)]:mt-2 list-disc in-[:where(ul,ol)]:list-[circle] ps-2xs mx-3xs\">\n<li class=\"mb-4xs\"><em>Broad access to general models with safeguards can coexist with more granular controls for higher-risk capabilities, supported by stronger verification, clearer signals of intent, and better visibility into use.<\/em><\/li>\n<li class=\"mb-4xs\"><em>To enable responsible use at scale, we need systems that can validate trustworthy users and use cases in more automated and more objective ways. This allows us to expand access based on evidence and real signals of trust, rather than relying on manual decisions. We don\u2019t think it\u2019s practical or appropriate to centrally decide who gets to defend themselves. Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.<\/em><\/li>\n<\/ul>\n<\/li>\n<li class=\"mb-4xs\"><em><b>Defenses should be continually scaled with capability.<\/b>\u00a0As model capabilities increase, defenses need to scale alongside them. 
We\u2019ve seen steady improvements in agentic coding, which have direct implications for cybersecurity and we\u2019ve adapted our approach in step.<\/em>\n<ul class=\"mb-md marker:text-inherit last:mb-0 in-[:where(ul,ol)]:mt-2 list-disc in-[:where(ul,ol)]:list-[circle] ps-2xs mx-3xs\">\n<li class=\"mb-4xs\"><em>We began cyber-specific safety training with GPT\u20115.2, then expanded it with additional safeguards through GPT\u20115.3\u2011Codex and GPT\u20115.4, where we also classified the model as \u201chigh\u201d cyber capability under our Preparedness Framework. In parallel, we increased support for defenders: launching a\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/form\/cybersecurity-grant-program\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">$10M Cybersecurity Grant Program<\/u>\u2060<\/a>, reached over 1,000 open source projects with\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/developers.openai.com\/community\/codex-for-oss\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Codex for Open Source<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0which provides free security scanning, and continued to improve Codex Security.<\/em><\/li>\n<li class=\"mb-4xs\"><em>Codex Security, which launched in private beta six months ago, and as a research preview\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/codex-security-now-in-research-preview\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">earlier this year<\/u>\u2060<\/a>, automatically 
monitors codebases, validates issues, and proposes fixes. As models have improved, so has the system\u2019s precision and usefulness. Since the recent launch, Codex Security has contributed to fixes for over 3,000 critical- and high-severity vulnerabilities, along with many more lower-severity fixes across the ecosystem.<\/em><\/li>\n<li class=\"mb-4xs\"><em>Across these releases, we\u2019ve also refined how models handle sensitive requests, calibrating refusal boundaries while expanding trusted access through programs like TAC.<\/em><\/li>\n<\/ul>\n<\/li>\n<li class=\"mb-4xs\"><em><b>Software development itself must be made more secure.\u00a0<\/b>The strongest ecosystem is one that continuously identifies, validates, and fixes security issues as software is written. By integrating advanced coding models and agentic capabilities into developer workflows, we can give developers immediate, actionable feedback while they are building, shifting security from episodic audits and static bug inventories to ongoing, tangible risk reduction.<\/em><\/li>\n<\/ul>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-xl\">\n<div id=\"scaling-trusted-access-for-cyber-and-gpt-54-cyber\" class=\"max-w-container @container grid w-full grid-cols-12 toc-content-heading scroll-mt-[calc(var(--header-h)+var(--toc-button-h))]\">\n<div class=\"full-grid-content:@md:col-span-12 full-grid-content:@md:col-start-1 col-span-12 max-w-none @md:col-span-6 @md:col-start-4\">\n<h2 class=\"text-h3 scroll-mt-[calc(var(--header-h)+var(--toc-button-h))]\"><em>Scaling Trusted Access for Cyber and GPT\u20115.4\u2011Cyber\u00a0<\/em><\/h2>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>We want to empower defenders by giving broad access to frontier capabilities, including models which have been tailor-made for cybersecurity. 
In February, we introduced\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/openai.com\/index\/trusted-access-for-cyber\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Trusted Access for Cyber<\/u>\u2060<\/a>\u00a0(TAC), with both automated identity verification for individuals, to reduce the friction of safeguards on cybersecurity-related tasks, and partnerships with a limited set of organizations for more cyber-permissive models.<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>Today we\u2019re expanding this program by introducing additional tiers of access for users willing to work with OpenAI to authenticate themselves as cybersecurity defenders. Customers in the highest tiers will get access to GPT\u20115.4\u2011Cyber, a model purposely fine-tuned for additional cyber capabilities and with fewer capability restrictions. This is a version of GPT\u20115.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that enable security professionals to analyze compiled software for malware potential, vulnerabilities, and security robustness without needing access to its source code.<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>Because this model is more permissive, we are starting with a limited, iterative deployment to vetted security vendors, organizations, and researchers. 
Access to permissive and cyber-capable models may come with limitations, especially around no-visibility uses like\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/developers.openai.com\/api\/docs\/guides\/your-data#zero-data-retention\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">Zero-Data Retention<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0(ZDR). This is particularly true for developers and organizations accessing our models through third-party platforms where OpenAI may have less direct visibility into the user, the environment, or the purpose of the request.\u00a0<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>Gaining access to TAC is straightforward:<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<ul class=\"mb-md marker:text-inherit last:mb-0 in-[:where(ul,ol)]:mt-2 list-disc in-[:where(ul,ol)]:list-[circle] ps-2xs mx-3xs\">\n<li class=\"mb-4xs\"><em>Individual users can verify their identity at<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"http:\/\/chatgpt.com\/cyber?openaicom-did=81804ab9-3974-4afa-b3e0-ae02ea6a8c3f&amp;openaicom_referred=true\" target=\"_blank\" rel=\"noopener\">\u00a0<u class=\"decoration-1 underline-offset-4\">chatgpt.com\/cyber<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>.\u00a0<\/em><\/li>\n<li class=\"mb-4xs\"><em>Enterprises can\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" 
href=\"https:\/\/openai.com\/form\/enterprise-trusted-access-for-cyber\/\" target=\"_blank\" rel=\"noopener\"><u class=\"decoration-1 underline-offset-4\">request trusted access<\/u>\u2060<\/a>\u00a0for their team through their OpenAI representative.\u00a0<\/em><\/li>\n<\/ul>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>All customers approved through this process will gain access to versions of existing models with reduced friction around safeguards which might trigger on dual-use cyber activity, allowing them to continue to support security education, defensive programming, and responsible vulnerability research. Customers already in TAC willing to further authenticate themselves as legitimate cyber defenders\u00a0<a class=\"transition ease-curve-a duration-250 text-primary-100 hover:text-primary-60 relative underline-offset-[0.25rem] decoration-1 underline\" href=\"https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLSea_ptovrS3xZeZ9FoZFkKtEJFWGxNrZb1c52GW4BVjB2KVNA\/viewform\" target=\"_blank\" rel=\"noopener noreferrer\"><u class=\"decoration-1 underline-offset-4\">can express interest<\/u>\u2060<span class=\"sr-only\">(opens in a new window)<\/span><\/a>\u00a0in additional tiers of access, including requesting access to GPT\u20115.4\u2011Cyber.<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-xl\">\n<div id=\"looking-ahead-to-our-upcoming-model-release-and-beyond\" class=\"max-w-container @container grid w-full grid-cols-12 toc-content-heading scroll-mt-[calc(var(--header-h)+var(--toc-button-h))]\">\n<div class=\"full-grid-content:@md:col-span-12 full-grid-content:@md:col-start-1 col-span-12 max-w-none @md:col-span-6 @md:col-start-4\">\n<h2 class=\"text-h3 scroll-mt-[calc(var(--header-h)+var(--toc-button-h))]\"><em>Looking ahead to our upcoming model release and 
beyond<\/em><\/h2>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>Our cybersecurity defenses are the result of many months of iterative improvement. We believe the class of safeguards in use today reduces cyber risk sufficiently to support broad deployment of current models. We expect versions of these safeguards to be sufficient for upcoming, more powerful models, while models explicitly trained and made more permissive for cybersecurity work require more restrictive deployments and appropriate controls.<\/em><\/p>\n<\/div>\n<div class=\"@md:col-span-6 @md:col-start-4 col-span-12 max-w-none [&amp;:not(:first-child)]:mt-sm\">\n<p class=\"mb-sm last:mb-0\"><em>Over the long term, to ensure the ongoing sufficiency of AI safety in cybersecurity, we also expect the need for more expansive defenses for future models, whose capabilities will rapidly exceed even the best purpose-built models of today.<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Competition between AI companies is heating up, as the latest developments in the world of artificial intelligence show once again. OpenAI has announced its new model GPT-5.4 Cyber, only a short time after Anthropic presented its own advanced cyber model, Claude Mythos. Both companies are thus going all in on a new market: AI systems designed specifically for cybersecurity.\u00a0According to OpenAI, GPT-5.4 Cyber was developed to support security researchers and experts in detecting vulnerabilities in software, analyzing security flaws, and strengthening digital defenses. 
The model is said to go further than earlier AI versions by understanding technical context better and being able to analyze more complex security problems.<\/p>\n","protected":false,"author":2,"featured_media":714823,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[338],"tags":[6210,6031],"class_list":["post-714822","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-applicatie-tool-ai-business-marketing-ai","tag-ai","tag-cyber"],"_links":{"self":[{"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/posts\/714822","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/comments?post=714822"}],"version-history":[{"count":5,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/posts\/714822\/revisions"}],"predecessor-version":[{"id":714828,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/posts\/714822\/revisions\/714828"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/media\/714823"}],"wp:attachment":[{"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/media?parent=714822"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/categories?post=714822"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nieuws.marketing\/strategie_nieuws\/wp-json\/wp\/v2\/tags?post=714822"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}