{"id":264,"date":"2026-04-22T16:41:16","date_gmt":"2026-04-22T16:41:16","guid":{"rendered":"https:\/\/aiandtech.com\/?p=264"},"modified":"2026-04-22T16:41:45","modified_gmt":"2026-04-22T16:41:45","slug":"chatgpts-new-image-model-turned-my-article-into-handwriting","status":"publish","type":"post","link":"https:\/\/aiandtech.com\/?p=264","title":{"rendered":"ChatGPT\u2019s new image model turned my article into handwriting"},"content":{"rendered":"<p>Image-generation models have long struggled with text. In many cases, garbled letters were an easy sign that an image was AI-generated. ChatGPT\u2019s new image tool, Images 2.0, is the best I\u2019ve seen so far at getting text right.<\/p>\n<p>I asked Images 2.0, now available to all ChatGPT users including free accounts, to take text from a recent article and write it in pencil on a yellow legal pad. The result looked almost perfect.<\/p>\n<p>I also prompted it to create an infographic about AI tokens. I told it to search the web for accurate information first, use a serif font, and work in a landscape 3:2 aspect ratio. The output was strong and cleanly laid out.<\/p>\n<p>Next, I asked it to build another infographic, this time showing different Raspberry Pi models along with specifications and other details. It handled that well too.<\/p>\n<p>I also gave it a photo of me poolside and asked it to generate a summer lookbook featuring outfits centered on that image. The result showed how far the model has come in following detailed creative prompts.<\/p>\n<p>OpenAI says Images 2.0 is its first image model with \u201cthinking\u201d capabilities, which means it can pause and reason through a prompt before generating an image. It also supports text in multiple languages, including Japanese, Korean, Chinese, Hindi, Bengali, and other non-Latin scripts.<\/p>\n<p>Another major upgrade is the ability to search the web for current information before creating images. 
The model can also generate multiple images in a single request, which should be useful for catalog layouts, comic-style panels, and storyboards.<\/p>\n<p>OpenAI says the model is designed to deliver much greater specificity and fidelity, so generated images should follow instructions more closely.<\/p>\n<p>That raises an interesting practical question: what are image-generation models actually good for beyond memes and deepfakes? Better text rendering points to some real uses, including fast typesetting, infographic creation, and catalog design.<\/p>\n<p>There are still limits. If you need to fix a typo, the image usually has to be regenerated from scratch. And as with many AI image tools, repeated use may produce results that start to feel visually repetitive, so human design judgment still matters.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Image-generation models have long struggled with text. In many cases, garbled letters were an easy sign that an image was AI-generated. ChatGPT\u2019s new image tool, Images 2.0, is the best I\u2019ve seen so far at getting text right. 
I asked Images 2.0, now available to all ChatGPT users including free accounts, to take text from [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":265,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jnews-multi-image_gallery":[],"jnews_single_post":{"format":"standard","override":[{"template":"1","parallax":"1","fullscreen":"1","layout":"right-sidebar","sidebar":"default-sidebar","second_sidebar":"default-sidebar","sticky_sidebar":"1","share_position":"top","share_float_style":"share-monocrhome","show_share_counter":"1","show_view_counter":"1","show_featured":"1","show_post_meta":"1","show_post_author":"1","show_post_author_image":"1","show_post_date":"1","post_date_format":"default","post_date_format_custom":"Y\/m\/d","show_post_category":"1","show_post_reading_time":"0","post_reading_time_wpm":"300","post_calculate_word_method":"str_word_count","show_zoom_button":"0","zoom_button_out_step":"2","zoom_button_in_step":"3","show_post_tag":"1","show_prev_next_post":"1","show_popup_post":"1","show_comment_section":"1","number_popup_post":"1","show_author_box":"0","show_post_related":"0","show_inline_post_related":"0"}],"image_override":[{"single_post_thumbnail_size":"crop-500","single_post_gallery_size":"crop-500"}],"trending_post_position":"meta","trending_post_label":"Trending","sponsored_post_label":"Sponsored 
by","disable_ad":"0"},"jnews_primary_category":[],"footnotes":""},"categories":[1],"tags":[],"class_list":["post-264","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts\/264","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=264"}],"version-history":[{"count":1,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts\/264\/revisions"}],"predecessor-version":[{"id":266,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/posts\/264\/revisions\/266"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=\/wp\/v2\/media\/265"}],"wp:attachment":[{"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=264"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=264"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aiandtech.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=264"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}