Deep Agency is a Dutch developer’s attempt to combine the worlds of artificial intelligence, photography, and modeling. High-resolution self-portraits against a variety of backdrops, as well as AI-generated images in response to a given prompt, are yours for just $29 per month. “Use online avatar creators to make a duplicate of yourself with a look that’s spot-on. Upgrade your photography skills and put an end to boring photo sessions,” the site encourages.
The creator, Danny Postma, claims that the platform is available worldwide and uses cutting-edge text-to-image AI models, such as DALL-E 2. You can edit your photo on the site by specifying the model’s pose and providing additional instructions.
Far from making models, photographers, and artists obsolete, the site is almost comical to use. Although Postma warns on Twitter that “things will break” and that the site is “in open beta,” the experience plays out like a more advanced version of DALL-E 2 in which you can only generate female models. The website is thus a timely reminder of AI’s shortcomings, specifically that AI-generated images are not only extremely rigid and easily detectable but also highly biased.
A paid subscription unlocks three other models (one female and two male) and lets you upload your own images to create an “AI twin.” Without one, the prompt must include “sks female” for the model to work, so the site will only generate images of women.
Enter a prompt, select a pose from the site’s image library, and tweak the time and weather, camera, lens and aperture, shutter speed, and lighting to create a unique image. So far, none of these parameters seem to make much difference: most generated images feature the same overlit portrait of a woman against a heavily blurred background.
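One plausible explanation for why the sliders change so little, assuming the site simply folds those settings into the text prompt (an assumption on our part, not something Postma has confirmed), is that a diffusion model treats camera jargon as loose stylistic hints rather than hard constraints. A minimal sketch of that kind of prompt assembly:

```python
# Hypothetical sketch of how UI settings could be flattened into a text prompt.
# This illustrates a common pattern for prompt-driven image sites in general,
# not Deep Agency's actual implementation.
def build_prompt(subject: str, settings: dict) -> str:
    """Append camera/lighting settings to the subject as comma-separated hints."""
    hints = ", ".join(f"{key} {value}" for key, value in settings.items())
    return f"photo of {subject}, {hints}"

prompt = build_prompt(
    "sks female",
    {
        "time": "golden hour",
        "camera": "DSLR",
        "lens": "85mm",
        "aperture": "f/1.8",
        "shutter speed": "1/200s",
        "lighting": "soft studio lighting",
    },
)
# -> "photo of sks female, time golden hour, camera DSLR, lens 85mm, ..."
# The model receives these only as text tokens, so they act as weak style cues,
# which would explain why most outputs look like the same overlit portrait.
```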
Select an image of a woman of a different race or likeness from the catalog and prompt “sks female,” and the site will still produce an image of a blonde white woman. To alter the model’s appearance, you must spell out the subject’s sex, age, and race in the prompt. When Motherboard used the site to generate an image based on one of its stock photos and a prompt describing a person of color wearing a religious headscarf, the resulting photo showed a white woman wearing a fashion headscarf.
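For context on why “sks” appears in the prompt: fine-tuning methods such as DreamBooth conventionally bind a rare placeholder token (often “sks”) to one specific subject, and the prompt must repeat that token to recall the learned likeness, which is why extra descriptors only nudge the output. The sketch below, using the open-source diffusers library, shows the general pattern; it is an illustration of the technique, not Deep Agency’s code, and the checkpoint path is hypothetical.

```python
# Minimal sketch of a DreamBooth-style identifier prompt with Hugging Face
# diffusers. Illustrative only; the model path below is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

# A DreamBooth fine-tune ties the rare token "sks" to one learned subject.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-finetuned-model",  # hypothetical checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Descriptors after the identifier act as soft hints; the fine-tuned subject
# (here, one specific woman) tends to dominate the result.
prompt = "photo of sks female, 35mm portrait, golden hour, shallow depth of field"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_model.png")
```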
The DALL-E 2 text-to-image generator from OpenAI has been shown to be rife with inherent biases. When asked to generate an image of “a flight attendant,” for instance, it shows only women, and when asked to generate an image of “a CEO,” it predominantly shows white men. OpenAI has acknowledged the problem and says it is working to improve its system, but pinpointing the precise origins of the biases and fixing them has proven difficult. A virtual photography studio built on top of a flawed model inherits the same problems.