{"id":254,"date":"2026-04-22T09:57:58","date_gmt":"2026-04-22T09:57:58","guid":{"rendered":"https:\/\/aiandtech.com\/?p=254"},"modified":"2026-04-22T09:57:58","modified_gmt":"2026-04-22T09:57:58","slug":"chatgpts-new-image-model-turned-my-article-into-handwriting","status":"publish","type":"post","link":"https:\/\/aiandtech.com\/?p=254","title":{"rendered":"ChatGPT\u2019s new image model turned my article into handwriting"},"content":{"rendered":"<div id=\"primary\">\n<div>\n<div>\n<p><span>Image: Ben Patterson\/Foundry<\/span>\t\t\t\t<\/p>\n<\/p><\/div>\n<div>\n<div id=\"link_wrapped_content\">\n<body><\/p>\n<div>\n<p><span>Summary created by Smart Answers AI<\/span><\/p>\n<h3 id=\"in-summary\">In summary:<\/h3>\n<ul>\n<li>PCWorld tested ChatGPT\u2019s new Images 2.0 model, which demonstrates remarkable accuracy in rendering text within AI-generated images, including handwritten styles.<\/li>\n<li>The upgraded model is now available to all users and introduces enhanced capabilities like web searching, infographic creation, and multi-language support including non-Latin scripts.<\/li>\n<li>Images 2.0\u2019s improved text rendering opens up practical applications such as creating catalogs, storyboards, and detailed technical documentation with near-perfect textual accuracy.<\/li>\n<\/ul>\n<\/div>\n<p>Image-generation models have a long history of bungling text. 
But while garbled letters used to be a clear AI tell, ChatGPT\u2019s new image-generation tool is the best I\u2019ve ever seen at rendering text.<\/p>\n<p>I asked ChatGPT\u2019s Images 2.0 model (available now to all ChatGPT users, including those on the free tier) to take some text from a recent story of mine and render it in pencil on a yellow legal pad and, well, it looks pretty much perfect to me:<\/p>\n<div>\n<figure data-wp-context=\"{\"imageId\":\"69e8963a60131\"}\" data-wp-interactive=\"core\/image\"><\/figure>\n<p>Ben Patterson\/Foundry<\/p>\n<\/div>\n<p>I also prompted it to create an infographic about AI tokens, instructing it first to search the web for accurate information and to use a serif font in a landscape 3:2 aspect ratio. Here\u2019s what I got:<\/p>\n<div>\n<figure data-wp-context=\"{\"imageId\":\"69e8963a60d9f\"}\" data-wp-interactive=\"core\/image\"><\/figure>\n<p>Ben Patterson\/Foundry<\/p>\n<\/div>\n<p>Then I tasked Images 2.0 with creating another infographic, this time detailing the various Raspberry Pi models complete with specifications and other details:<\/p>\n<div>\n<figure data-wp-context=\"{\"imageId\":\"69e8963a61950\"}\" data-wp-interactive=\"core\/image\"><\/figure>\n<p>Ben Patterson\/Foundry<\/p>\n<\/div>\n<p>Finally, I asked the model to take a snapshot of me poolside and create a summer lookbook of outfits, starring me:<\/p>\n<div>\n<figure data-wp-context=\"{\"imageId\":\"69e8963a623c8\"}\" data-wp-interactive=\"core\/image\"><\/figure>\n<p>Ben Patterson\/Foundry<\/p>\n<\/div>\n<p>OpenAI says Images 2.0 is its first image-generation model with \u201cthinking\u201d capabilities, meaning it can stop and ponder an image prompt before diving right in.\u00a0<\/p>\n<p>When it comes to text, Images 2.0 supports a variety of languages, including Japanese, Korean, Chinese, Hindi, Bengali, and others that employ non-Latin text.\u00a0<\/p>\n<p>It can also search the web for real-time information before rendering images, as well as 
create multiple images in one shot, good for rendering catalog images, comic-book-style panels, and storyboards.<\/p>\n<p>OpenAI promises that Images 2.0 will deliver an \u201cunprecedented level of specificity and fidelity,\u201d meaning (hopefully) that it will do a better job at prompt adherence \u2013 that is, creating images that follow your prompts to the letter.<\/p>\n<p>With this level of accuracy, Images 2.0 could offer an answer to the question I\u2019ve long asked about image-generating models: What are they good for, aside from creating goofy memes or creepy deepfakes? What\u2019s the actual, practical application?<\/p>\n<p>Near-instant typesetting, infographic creation, and catalog rendering could be some of the answers, although fixing a typo would require completely re-rendering the image.<\/p>\n<p>It\u2019s also possible that the more you experiment with Images 2.0 (I\u2019ve only been playing with it for an hour or so), the more samey the rendered images may start to look, which is why you\u2019d likely need a skilled human prompter with an eye for design at the helm.<\/p>\n<p><\/body><\/div>\n<div data-ga=\"article-footer-author\">\n<h3>\n<p>\t\tAuthor: Ben Patterson, Senior Writer, PCWorld\t\t<\/h3>\n<div>\n<p>Ben has been writing about consumer technology for more than 20 years, and now focuses his reporting on AI as it relates to the basic human experience. His coverage of artificial intelligence interrogates the latest LLMs, and how they can be used at work and at home to be best prepared for the AI revolution. \u201cAI is going to change our lives sooner than we think,\u201d Ben writes. \u201cOur best way to adapt is by using it every day.\u201d Ben has been a PCWorld author since 2014, and has covered everything from laptops to security cameras before launching PCWorld\u2019s AI beat. Ben&#8217;s articles have also appeared in PC Magazine, TIME, Wired, CNET, Men&#8217;s Fitness, Mobile Magazine, and more. 
Ben holds a master&#8217;s degree in English literature.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Image: Ben Patterson\/Foundry Summary created by Smart Answers AI In summary: PCWorld tested ChatGPT\u2019s new Images 2.0 model, which demonstrates remarkable accuracy in rendering text within AI-generated images, including handwritten styles. The upgraded model is now available to all users and introduces enhanced capabilities like web searching, infographic creation, and multi-language support including non-Latin scripts. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":255,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jnews-multi-image_gallery":[],"jnews_single_post":[],"jnews_primary_category":[],"footnotes":""},"categories":[1],"tags":[],"class_list":["post-254","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts\/254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=254"}],"version-history":[{"count":0,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts\/254\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/media\/255"}],"wp:attachment":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aiandtech
.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}