Article, Open Access

Product Ideation in the Age of Artificial Intelligence: Insights on Design Process Through Shape Coding Social Robots

  • Shujoy Chakraborty, Dirk Loyens and Jeremy Aston
Published/Copyright: August 16, 2025

Abstract

This research explores the impact of generative artificial intelligence (GenAI) on the ideation and concept design of social robots capable of sustained, long-duration human–robot interaction. The work reported here was developed between 2021 and 2024 through classroom teaching executed in four editions of 3-day project workshops, involving 36 product design master's students who produced 27 concept design proposals at a European Higher Education Institution (HEI). The first two workshop editions used only classical methods: semantic moodboarding, sketching, virtual 3D modelling, and rendering. The last two editions employed mixed methods blending classical methods with computational methods using text-to-image and sketch-to-image GenAI tools such as Midjourney, DALL-E, and Vizcom. The findings suggest that mixed methods, which co-create by combining organic and synthetic creativity, increase the number of concepts produced, although the concepts' creative quality remains questionable. The advantage of computationally enhanced mixed methods over traditional classical methods is their greater potential to overcome creative blockages in novice designers with weak ideation skills: increasing the volume of concept exploration increases the serendipitous probability of arriving at successful outcomes. This research is a case study of GenAI implementation in classroom teaching, highlighting its benefits and limitations for design courses in HEIs.

1 Introduction

A social robot is a robot able to act autonomously and interact with humans using social cues (Kędzierski, Kaczmarek, Dziergwa, & Tchoń, 2015). Three accepted morphologies are classified in the current literature on this product category: humanoid, semi-humanoid, and petlike (Causo, Vo, Chen, & Yeo, 2016). Although this is an emerging product category, recent literature has been converging on definitions of the features and capabilities of social robots: these machines are classified as artificial agents capable of interacting with humans using verbal and non-verbal communication cues (David, Thérouanne, & Milhabet, 2022). Social robots are becoming integral to human lives, from companionship and healthcare to customer service. The form design, or shape coding, of these robots is critical in ensuring user acceptance, trust, and functionality (Shukman, 2015). Recent discussions emerging from the design and engineering community working with social robots point out that lively and expressive behaviour can cause the human user to project social intelligence onto such machines (Boschetti, 2014). The degree of anthropomorphic resemblance of a social robot's external appearance directly relates to the attribution of social capability (Hegel, 2012). However, the well-documented literature on the Uncanny Valley phenomenon (Mori, MacDorman, & Kageki, 2012) warns of limitations in approaching highly anthropomorphic shape codes for a social robot.

The study presented in this research was executed over 4 years of classroom teaching (2021–2024) and focused on exploring the shape coding of social robots capable of undertaking long-duration interaction with human users (Kędzierski et al., 2015). To achieve a more structured and discrete exploration outcome, four narrower, specialised subcategories were differentiated by role: companion robot (2021), healthcare assistive robot (2022), educational robot (2023), and general-purpose domestic robot (2024). These roles were selected after consultation between the authors due to their relevance to the everyday life of twenty-first-century society.

Recent developments inserting generative artificial intelligence (GenAI) technology and large language models (LLMs) into robotics have permitted the emergence of social robots capable of ever more human-like interactions with people (Claburn, 2024), along with associated risks and as-yet-unknown threats. These robots cover areas such as general personal assistance, personal sports training, education, personal coaching, and general companionship. Therefore, building these machines to promote acceptance and trust is not only a subject of software design but also a question for industrial product design.

How should roboticists shape code a social robot so that it achieves positive social dynamics between robots and humans, promoting human–robot interaction (HRI) and thereby reducing the separation between the two (Kędzierski et al., 2015)?

When addressing the design question of social robots, the heart of the issue concerns selecting the most appropriate shape code among the three morphologies defined in the existing literature: humanoid, semi-humanoid, or petlike (Causo et al., 2016). At which point do people start experiencing discomfort when interacting with anthropomorphic robots, and where does the threshold lie at which the uncanny valley phenomenon is triggered (Hegel, 2012; Kędzierski et al., 2015)? These are the two main guiding research questions that roboticists must confront when approaching the industrial product design of social robots and all their associated specialised subcategories. Shape coding is, therefore, a critical intervention in the design process of social robots.

This research explores the issue of shape coding such machines through teaching concept design projects in a university master course studio setting, utilising either classical methods based on traditional ideation tools or mixed methods that blend traditional ideation tools with computational text-to-image GenAI tools.

Comparing and contrasting the two approaches, traditional and hybrid, allows for a critical analysis of the output and of the respective productivity impact of deploying artificial intelligence (AI)-based synthetic creativity versus human intelligence (HI)-based organic creativity in ideation work.

Ideation in product design is a dynamic and exploratory activity, generating initial, low-fidelity concept designs that have not yet been evaluated for feasibility, allowing creativity to flow freely. It is a critical step in the design process lying within the concept design phase, where designers explore initial impressions, challenge assumptions, explore new angles, and develop many possible ideas (Dorst & Cross, 2001). One of the primary objectives of ideation is to explore various possible design solutions for the form and geometry of the industrial product (Brown, 2009; Ulrich & Eppinger, 2012). This divergent thinking activity is crucial for identifying innovative and effective ways to create product forms that are visually appealing and aligned with the project’s design objectives (Dorst & Cross, 2001; Simeone, Mantelli, & Adamo, 2022).

During ideation work, designers employ various techniques to externally visualise their internal divergent thinking. Designers generate product ideas, often as shape codes, visualising different shapes, forms, and geometry options. The most common classical methods are sketching, virtual 3D modelling, and rapid prototyping. These methods allow designers to quickly iterate on their ideas, refine concepts, and gather feedback from stakeholders and users (Tovey, Porter, & Newman, 2003).

Text-to-image GenAI technology is emerging as a computational tool in art and design in general, and in industrial product design in particular, offering a possible novel approach to concept generation that has yet to be properly assessed for effectiveness. This technology utilises LLMs, machine learning algorithms, deep learning, and neural networks to generate high-resolution visual representations with appealing results based on textual input (Ramesh et al., 2021). Various commercial software tools utilising this technology are readily available through subscription-based models. This study utilised text-to-image and sketch-to-image software with the trade names Midjourney and Vizcom, respectively.
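
The internals of these commercial platforms are proprietary, but the underlying text-to-image mechanism can be sketched with an open-source analogue. The minimal example below assumes the Hugging Face diffusers library and a pretrained Stable Diffusion checkpoint; it illustrates the class of technology described above, not the specific tools used in the workshops.

```python
# Illustrative only: Midjourney and Vizcom are closed platforms, so this sketch
# uses an open-source latent diffusion pipeline as an analogue of the
# text-to-image technology described above.
from diffusers import StableDiffusionPipeline
import torch

# Load a pretrained checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # use .to("cpu") if no CUDA device is available

# A descriptive text prompt drives the generation, as with the commercial tools.
prompt = ("concept design of a pet-like social robot, friendly, "
          "soft rounded geometry, studio product render")
image = pipe(prompt).images[0]
image.save("social_robot_concept.png")
```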

The scientific contribution of this study is to improve the current understanding of these GenAI tools held within the design community by offering insights on the benefits and pitfalls of their adoption in the concept design phase of any design process model. Such a contribution can offer insights into the future potential of such GenAI tools in product design education for training future professionals.

2 State of the Art – GenAI and Design Process

2.1 Current Scenario of GenAI in Design

Creativity can be positioned as a combination of novelty and usefulness (Doshi & Hauser, 2024; Harvey & Berry, 2023). The quality of creative ideas can, therefore, be judged on the level of their novelty and their usefulness to the problem they are intended to resolve. Research on measuring the impact of GenAI on creativity remains underdeveloped, especially in the disciplinary area of product design. The interaction of organic human creativity assisted by GenAI tools remains largely underexplored. Understanding how such computational tools can be integrated into a classical design process workflow, and measuring their influence on the quality and depth of the concepts generated, is a subject still under study. Studies on the impact of GenAI on general creativity have only recently emerged (Doshi & Hauser, 2024), and studies with a narrower focus on product design have yet to address the potential and limitations of GenAI-augmented ideation and co-creation of concepts.

The published literature on the subject argues that the deployment of GenAI increases overall creativity in a sample group of creators but ultimately reduces the diversity of the results (Doshi & Hauser, 2024). Furthermore, it is creators weak in creativity whose output benefits most from access to GenAI; conversely, highly creative creators do not benefit from access to GenAI, and using it may even degrade their output quality (Doshi & Hauser, 2024; Figoli, Rampino, & Mattioli, 2022).

Current-generation text-to-image and sketch-to-image GenAI tools cannot generate 3D forms and models, limiting their output to 2D shapes and images. Although text-to-image GenAI tools like Midjourney can generate a vast library of image proposals, they fail to understand domain-specific terms and generate solutions unsuitable for manufacturing purposes (Zhang, Wang, Pangaro, Martelaro, & Byrne, 2023); they therefore often visualise surprising and unexpected results, which can either detract from or boost the ideation work. In fact, this probabilistic nature of GenAI has been debated in recent literature (Hicks, Humphries, & Slater, 2024), which questioned whether LLM-based GenAI technologies understand any of the output they produce at all, arguing that text-based GenAI applications are "bullshit machines."

A word of caution for designers considering deploying GenAI in ideation: the dependence of GenAI tools on training data may condition the quality of the output depending on the product category being designed. For product categories where a large amount of training data exists, typically classical products, the probability of encountering creative and innovative proposals is higher than in emerging or futuristic product categories, on which much creative work may not yet have occurred. The issue of design fixation (Hoggenmueller, Lupetti, Van Der Maden, & Grace, 2023) has been discussed in the context of deploying GenAI for creative ideation on social robotics. Design fixation is a "blind adherence" (Jansson & Smith, 1991) to set ideas and concepts that can regress rather than progress ideation work towards new directions.

Regardless of the product category under consideration, the crafting of textual descriptions in an appropriately structured fashion, or "prompt engineering," must be given due consideration to achieve more constructive and valuable results.

With these limitations in mind, care must be taken when selecting the GenAI platforms to be used, as they are most likely built on qualitatively different training data (Hoggenmueller et al., 2023) and have different prompt engineering requirements. Nevertheless, due to the recent nature of these software tools, such information remains a "black box" for all practical purposes, and designers must adopt a trial-and-error approach, consequently accepting the probabilistic nature of the output these tools offer.

However, even with these limitations considered, text-to-image GenAI technology has the potential to reshape ideation work in the concept design phase by acting as a catalyst for designers, through its capacity to generate a numerically vast number of proposals with an element of surprise baked into its mechanics. This probabilistic nature of GenAI tools introduces an element of serendipity into the ideation workflow, causing unexpected changes in direction.

The resulting benefit to the workflow is the ability to broaden the exploration of possible solutions, which novice designers may otherwise never have considered if working exclusively with traditional tools based on classical methods (Chiou, Hung, Liang, & Wang, 2023). Thus, GenAI assists novice designers in overcoming creative blocks by stimulating their imagination through visual stimuli and providing inspiration.

2.2 Novelty of this Research

The authors of this study propose an effective strategy for inserting text-to-image and sketch-to-image GenAI tools into the product design process, and identify ideation, which occurs during the concept generation phase of any design process model, as the appropriate moment to insert such tools and technologies. By adopting a hybrid approach that combines computational tools like GenAI with classical tools like drawing and modelling, i.e. blending, this research demonstrates an effective technique for exploiting GenAI to achieve quality results.

The insertion of GenAI tools anywhere in the design process is still very much an emerging area of design research (Figoli et al., 2022). The resulting human-AI collaborative co-design paradigm has been cited in the existing literature as a phenomenon with much promise and potential (Figoli et al., 2022). With the emergence of agentic AI technologies and reasoning models in 2025, this human agent and AI system co-design issue will open up even more interesting opportunities.

The strategy of inserting GenAI increases the sheer number of concept designs generated, thus allowing rapid exploration of a wide range of ideas (Figure 2) and enabling designers to quickly generate multiple form representations by modifying the input text (Kulkarni et al., 2023). AI systems can throw "random visual stimuli" (Figoli et al., 2022), encouraging designers to explore novel and unconventional ideas. The GenAI tools used in this study (Midjourney and Vizcom) can visualise concepts that may be difficult or time-consuming to create through classical techniques such as drawing or 3D modelling.

However, it is essential to remain aware of the limitations of text-to-image GenAI in the ideation phase, as already discussed above. While the technology can generate visually appealing and diverse concepts, it may lack contextual understanding and domain-specific knowledge, i.e. originality, manufacturability, materials, and human–product interaction, among others. This is knowledge that human designers supposedly possess; this limitation of GenAI therefore validates the presence of the creative human designer in the workflow.

Indeed, as per the definition of creativity used in this article, GenAI only partially fulfils the definition, offering novelty but completely disregarding utility or usefulness. In this sense, GenAI is a good assistive boost to the already creative human designer, though not a replacement for them. The current model of university classroom studio-based training of product design students focuses quite robustly on the competency to critically assess, dispassionately judge, and unbiasedly qualify concept designs against novelty and usefulness. This capability is combined with soft skills based on tacit knowledge, reading social cues, and identifying cultural appropriateness, and some hard skills, including understanding manufacturability, technical limitations, and economic viability. It is essential to understand that the current generation of GenAI does not encapsulate all the competencies that a human designer is understood to possess through classroom training.

Additionally, the generated concepts may not always align with the specific constraints, requirements, or aesthetic preferences of a given design project. Despite these limitations, studies have shown that by using text-to-image GenAI in ideation, designers can explore a broader range of ideas and push the boundaries of their creativity (Paananen, Oppenlaender, & Visuri, 2023). However, to ensure the quality and relevance of the generated concepts, designers must critically evaluate and refine AI-generated ideas with rigorous post-production work, combining them with their own expertise and judgement.

Finally, designing social robots presents a uniquely challenging product category to students because, as a novel emerging product category, the aesthetic quality based on form and composition must be carefully balanced with meaning transmission based on language and cultural context to arrive at a design solution with high social acceptability. The lack of an existing visual vocabulary for shape codes, the pitfalls of social biases and preconceived notions, and the phenomenon of design fixation are threats that can contaminate the results of GenAI tools by leaking into their training data, and thus hold the potential to easily distract a novice designer towards suboptimal solutions.

Designers therefore need to cast a very wide net when prompting GenAI at the very beginning of a design process, in the form exploration exercises where ideation usually occurs, to extract the maximum benefit from this technology. In novice designers, especially those not yet well trained in traditional ideation methods, denying access to GenAI may limit their ideation creativity. The authors have first-hand empirical experience of students struggling with the complexity of finding an equilibrium between aesthetic expression and meaning expression while ideating the form and shape codes of products. Furthermore, the iterative and linear nature of sketching and 3D model making is time-consuming and may restrict deeper exploration of unconventional form solutions. Introducing text-to-image GenAI tools into the mix may make it possible to increase the divergence of conceptual form exploration, although in theory using GenAI for emerging products may not always be helpful, given that GenAI generates visuals based on training data; currently, there may not be much existing visual content on social robot designs to draw on when generating proposals.

3 Methodology

3.1 Methodology Design

This research is structured as a classroom teaching-based exploratory case study focused on how the ideation work in the concept design phase of a design process is impacted by the introduction of text-to-image and sketch-to-image GenAI tools. Specifically, it examines the potential benefits for students, how their workflow is affected, and the impact on the degree of divergence in ideation. The execution strategy of this research was formatted as design workshops in a practical studio project subject involving Master-level design students. Four workshop editions were conducted at a European Higher Education Institution (HEI) between 2021 and 2024.

The workshop project theme was to shape-code concept designs for social robots with semi-humanoid or pet-like morphologies, with a different specialised subcategory in each workshop edition (companion robot, healthcare assistive robot, educational robot, and general-purpose domestic robot, respectively). The limitation on morphology was imposed because existing literature on social robotics highlights the pitfalls of the uncanny valley phenomenon when approaching humanoid robots (Causo et al., 2016). The respective subcategories were selected by consensus among the co-authors, who agreed on their relevance based on emerging literature in the social robots HRI community.

The first two editions (2021, 2022) employed traditional organic HI ideation methods such as semantic moodboarding, sketching, 3D modelling, and conceptual rendering. The last two editions (2023, 2024) integrated organic HI ideation with synthetic GenAI-assisted ideation methods based principally on Midjourney (n.d.) and Vizcom (n.d.), and on occasion also DALL-E (OpenAI, n.d.).

The inclusion and exclusion of GenAI in the workshop editions allowed the authors to undertake a comparative analysis of the general process and resultant output occurring in the two test conditions – with and without the injection of GenAI tools in the concept development tasks given to the students.

The decision to insert GenAI into this research on shape coding social robots was made to test the viability of this technology for concept generation and to identify the authors' position regarding the intense debates surrounding this technology from 2022 onward. The specific tools indicated above were selected due to the significant attention in the design community surrounding their emergence in 2022, as well as their commercial availability during this period. Bearing in mind that Midjourney required a subscription for access beyond 50 generation cycles, DALL-E, which offered free access at the time, was used as a backup in case the students could not achieve the desired results within the limited access.

This research used two kinds of GenAI tools: text-to-image and sketch-to-image. Text-to-image tools like Midjourney and DALL-E generate computer-rendered images based on a descriptive text input, or prompt. Sketch-to-image tools like Vizcom are particularly targeted at the product design community and generate images based on sketches, which the designer must first upload to the platform; the hand-drawn sketch, combined with a complementary text prompt, then triggers the generation of the computer-rendered image. Both typologies of GenAI tools can combine the generated synthetic images with other existing images the designer may possess (sketches, moodboards, visual benchmark research); it is possible to blend two images to generate yet another image that combines their visual information and shape codes, achieving a more sophisticated and aesthetically targeted outcome. Vizcom has almost real-time rendering capabilities, allowing students to transform basic analogue line sketches into computer-generated graphics for immediate presentation. Both types of tools can be used in conjunction or in a complementary fashion, providing the student with a GenAI toolkit that aligns well with the experimental needs of product design ideation, which requires rapid and free-flowing form exploration. Figure 1 shows the user interface (UI) of Midjourney and Vizcom, illustrating the differences between various text-to-image and sketch-to-image operations.
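
Of the tools named here, only DALL-E offers a public programmatic interface; Midjourney is operated through Discord and Vizcom through its web UI. As a hedged illustration of the text-to-image operation, the following minimal sketch assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the prompt text is a hypothetical example in the style the students used.

```python
# Minimal text-to-image call against OpenAI's image API (the programmatic
# counterpart of the DALL-E tool the students used through its web interface).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt=("semi-humanoid educational social robot for children, "
            "rounded friendly geometry, matte white and pastel blue finish, "
            "classroom scenario, product design concept render"),
    size="1024x1024",
    n=1,  # dall-e-3 generates one image per request
)
print(response.data[0].url)  # URL of the generated image
```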

Figure 1: The UI of Midjourney (top) and Vizcom (bottom). The top left, referring to the project Vivo (educational robot), shows the Midjourney image generation operation, which works on the Discord server platform through a bot-based interaction. Each iteration of the prompt cycle generates four images, and the designer may select the most preferred image to trigger the next iteration cycle. The top right shows the image blending operation in Midjourney: the designer picked an image from a semantic moodboard of "calm" and combined it with the Midjourney-generated synthetic image to achieve blended results. The bottom, referring to the project Hug (general-purpose assistive robot), shows the UI setup of Vizcom: the central canvas space is where the designer's base reference sketch is uploaded; the right column contains the prompt box; below the prompt area is the style reference image upload control, which allows an aesthetic influence reference image to be uploaded; and at the bottom is the slider that adjusts the level of intervention the designer grants to the GenAI to modify the image.

Shaping the prompt, i.e. prompt engineering, by inserting the appropriate pragmatics (phrasing), semantics (vocabulary), syntax, sentence structuring, and specialised rendering terminology (photorealistic, 4K, wide angle, etc.) is still very much an art of trial and error, probabilistic in nature, and an emerging knowledge area. In each of the two workshop editions in which GenAI was used (2023 and 2024), a 1 h session was devoted to prompt engineering to align the students with the basics of these tools before they began their ideation work.
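
To make these prompt components concrete, the hypothetical helper below renders the teaching framework as code: each argument corresponds to one of the elements named above (product category, shape code, character adjectives, colour-finish-material, scenario, and rendering terminology). It is an illustration of prompt structuring, not an API of any GenAI tool.

```python
# Hypothetical illustration of prompt structuring: each field maps to one of
# the prompt components discussed above. The field names render the teaching
# framework as code; they are not part of any GenAI tool's interface.
def build_prompt(category, shape_code, characters, cmf, scenario,
                 render_terms=("photorealistic", "4K", "wide angle")):
    parts = [
        category,                  # e.g. "healthcare assistive social robot"
        shape_code,                # e.g. "pet-like morphology, soft volumes"
        ", ".join(characters),     # target semantic adjectives
        cmf,                       # colour-finish-material specification
        scenario,                  # intended deployment context
        ", ".join(render_terms),   # specialised rendering terminology
    ]
    return ", ".join(p for p in parts if p)

print(build_prompt(
    category="educational social robot for primary school children",
    shape_code="semi-humanoid, compact tabletop proportions",
    characters=["friendly", "trustworthy", "interactive"],
    cmf="matte white polymer body with pastel blue accents",
    scenario="on a classroom desk",
))
```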

All the results were captured and arranged for a classroom presentation at the conclusion of each workshop edition using Miro, a collaborative visual boarding platform. Miro permits the simultaneous participation of all the students in the same virtual workspace; therefore, the students can observe and draw inferences from the content posted together with their colleagues. The coordinating teacher created a collaborative Miro classroom workspace that included all the students and all the teachers. The students could post their project development, and the teachers could review and leave their comments. The final project presentations were held in the same studio setting where the workshops were conducted, with care taken to maintain an informal yet professional ambience. Each student received 15 min to present the project development process and the final output by sharing the Miro board and compiling a PowerPoint presentation. A physical scaled model mockup was an optional output that students could build to support their concept design.

3.2 Workshop Design

The workshops were formatted as 3-day engagements. The first day was used for theme introduction, technical research, visual research, selecting three symbolic keywords, and semantic moodboarding for each of the three keywords. The second day was divided into two halves, beginning with theoretical and practical training on GenAI tools and analogue sketching targeted at shape coding the symbolic keywords through the moodboards; GenAI was then deployed for form exploration. The third day was again divided into two halves, beginning with form detailing and concluding with project communication and presentation. At the end of each of the first two days, the authors, in their capacity as supervising teachers, reviewed the progress of each project in a tabletop studio discussion format.

Given the short timeframe for the workshops, it was decided to adopt the already established double diamond as a reference design process model and focus all ideation and concept generation activity within the initial stages of “discover” and “define” steps of this model (Design Council, n.d.). This decision allowed the workshop to skip the challenges of implementing technical and manufacturing-related concerns typically associated with the later phases of any process model, i.e. “develop” and “deliver” phases in the case of the double diamond model. Thus, the workshop covered both divergent and convergent phases, concentrated in the initial portion of the process. Accordingly, the teachers shaped the student workflow to ensure that time was invested proportionally between divergent and convergent thinking across the three working days. The teachers monitored the time to ensure that the leading divergent portion consisted of collaborative brainstorming, research, and generating low-fidelity ideas through sketching, as well as exploring concepts through visual representations. The trailing convergent portion was dedicated to transforming idea explorations into high-fidelity conceptual renderings, scenario painting, and a concept communication plan.

The workshops were staged under consistent parameters to ensure uniformity of process and facilitate comparison of results between the various editions, with the following salient points:

  1. Teachers: Three workshop mentors with between 10 and 30 years of expertise in product design academia and industry.

  2. Students: The class size varied between 8 and 16 second-year Master's (MA) product design students, equipped with four years of undergraduate design education but lacking any prior experience of text-to-image or sketch-to-image GenAI software. A total of 36 product design master's students participated in the four workshop editions.

  3. Infrastructure: Provision of a dedicated design studio room outfitted with a comprehensive array of manual and digital design tools for sketching, prototyping, 3D printing, and computer-aided development.

  4. Input: Presentation of a design brief shaped as a PowerPoint lecture outlining the current foundation theory on social robotics, problem setting, design challenge, thematic context, objectives, and expected outcome, with a specific focus on crafting visual representations of social robot concept designs appropriately shape coded with semantic keywords (Krippendorff & Butter, 1984; Krippendorff, 2005). In the workshops where GenAI was used, an input session of around 1 h on prompt engineering was offered. The design brief is essential to any design project, facilitating and clarifying the scope, aims, and objectives of the various parties involved in the development process (Phillips, 2004).

  5. Timeline: Each edition was structured in a 3-day schedule encompassing 12 h of teaching contact spread across three sessions of 4 h each, supplemented by approximately 20 h of autonomous student work. During the contact hours, fundamental concepts of shape coding, HRI, and intricacies of social robotics were discussed in group sessions. During the autonomous hours, the students conducted additional research and form exploration individually or in groups of three.

  6. Output: Twenty-seven social robot concept design proposals were presented in total. Each project was presented through a 15 min PowerPoint summarising the entire design process, supplemented by the collaborative Miro board, which presented a more elaborate version visualising the entire visual research, semantic moodboarding studies, shape coding sketches, form studies, GenAI prompt engineering with highlights for successful and unsuccessful phrases, GenAI visual output, blending exercises, and final design renders. The work was always developed individually, except in the fourth edition, where students worked in groups of three.

  7. Evaluation: The three authors (teachers) collectively judged the creative quality of each project and delivered verbal critiques (crits) for each presentation. The critiques were followed by a group discussion, during which students and teachers debated the final results. These group discussions enabled critical analysis, comparison, and constructive feedback on the diverse concept design strategies. No quantitative or qualitative grade was awarded; therefore, these workshops did not impact the students' semester academic results. To make a judgement, the authors considered four factors:

    1. The appropriateness of the overall concept for the intended social robot subcategory (companion, healthcare assistive, educational, or general-purpose domestic), considering the theoretical knowledge on social robotics shared at the beginning of the workshop;

    2. Quality of the process visualisation and evidence of the ideation and concept generation work on Miro and PowerPoint;

    3. Overall novelty of the proposed design concept in terms of innovation; and

    4. Resolution of the final proposal in terms of form, mechanics, HRI, detail design, scenario graphical quality, and presentation communication.

  8. Consent: The students' consent to publish and share the output of their work was not an issue, since the guidelines of the HEI in which this activity occurred stipulate that all content generated within the premises remains the intellectual property of the institution and may be used for research and publication purposes. The data protection, personal identity, and privacy of the students were ensured by not sharing any photographs of the general classroom environment with visible faces in this reporting.

4 Findings and Results

4.1 Differentiating Organic and Synthetic Creativity in Ideation

Integrating computational tools of synthetic creativity like GenAI into design curricula is a relatively new development, and both educators and students are still navigating the opportunities and challenges associated with their practical implementation. The authors' empirical observations indicate that students need well-developed written communication skills to interact effectively with GenAI tools and to avoid suboptimal results and frustration. Current classroom teaching and the orientation of product design education, in the European context at least, is primarily calibrated towards stimulating and shaping organic creativity. The introduction of synthetic creativity through GenAI computational tools is still unmeasured and therefore largely not yet controlled or monitored.

Organic creativity must defend and transparently expose the logic and reasoning behind every creative decision implemented by the designer along the entire ideation phase of the overall design process while arriving at a specific conceptual proposal (Figure 2). On the contrary, text-to-image GenAI tools, at the time this research was developed, do not expose the logic and reasoning behind the 2D images and designs they generate from a prompt; such tools are still very much a black box in the overall picture of a design process in the minds of many in the design community (Figure 3).

Figure 2: Project Cuteo (pet robot) – Miro board demonstrating the overview of the classical methods workflow from the first workshop edition (2021).

Figure 3: Project Vivo (educational robot) – Miro board demonstrating the overview of the GenAI computational methods workflow and the resulting pure synthetic ideation using Midjourney from the third workshop edition (2023). The analysis of text prompts and the resulting images they generate can be seen, with yellow highlights indicating text prompts aligned with the target semantic shape codes and red highlights indicating text prompts deviating from the target semantic shape codes.

In terms of ideation and concept generation, where this research focuses, the stark differentiation between expressing organic and synthetic creativity causes disharmony in the students' process workflow. When students attempt to merge GenAI synthetic creativity, based on black box training data, with their spontaneous organic creativity, based on analogue sketching and visualisation work, a cognitive dissonance occurs. Students must transition from evidencing their thinking through lines, sketches, and form composition explorations to written words and precisely imagined descriptive (adjective) details. Lines and sketches are drawing-based visual artefacts in which product design students receive strong training, while a prompt is a written language-based artefact relying on reading and writing skills, in which design students are not usually well trained.

In traditional methods based on organic creativity, the initial visual exploration of ideation often begins with low-fidelity doodling or rapid sketches when the designer still has no clear vision of the final solution and is actively trying to concretise that vision by drawing. However, in the case of the computational methods based on GenAI synthetic creativity, the initial visual exploration required the designer to already hold some concrete vision in terms of shape, geometry, proportion, and context to generate valuable content.

4.1.1 Prompt Engineering

Figure 1 shows that, at the very beginning, designers input a more generic prompt containing only the product category and context, hoping to explore the baseline creativity of the GenAI tool (Midjourney) and launch the ideation. Figure 3 shows that, in more advanced stages of ideation, the prompts are more precise, containing indications of the product category, intended geometry, semantic character, material, colour, and even the intellectual capability of the social robot.

The third edition (2023) of the workshop was the first time that GenAI was introduced into the process (Figure 3). The third edition experimented primarily with Midjourney and occasionally DALL-E, while the fourth edition (2024) experimented with Midjourney and Vizcom. In the initial stages of first contact with GenAI, many students needed help with efficient prompt structuring to achieve the desired output quality. The authors observed that many students, rather than using descriptive words (adjectives) and shape code specifications while crafting prompts, tended to converse with the software (Figure 1). This approach does not leverage the strengths of GenAI tools, which respond better to prompts that clearly state the product category, desired shape code details, the character adjective to be attributed, colour-finish-material specifications, and the desired deployment scenario (Figure 3).

Recognising the importance of effective prompt engineering skills for achieving quality generations, the 2023 and 2024 editions of the workshop were attentive to teaching students how to craft compelling and appropriate text phrases that leverage the capabilities of GenAI for product design-oriented output. The workshop programme included a dedicated session of around 1 h on prompt structuring on the second day, before students launched into their GenAI exploratory work, providing them with a framework for understanding the critical components of effective prompts and strategies for iterating and refining their prompts based on the AI's output.

Articles (a, the, it, etc.), connecting words (and, therefore, etc.), and conversational expressions (please, etc.) only add noise to the prompt without contributing to tightening the focus around the desired qualities of the results, and are best removed. Even so, due to the opaque and probabilistic nature of these GenAI tools, reaching the desired output was not predictable. This condition is far from the classical mechanics of traditional methods like drawing and iterative rapid sketching, where the design student can identify a strategy of intervention with pinpoint accuracy and implement a targeted intervention to improve or alter a specific detail. As the workshop progressed, the participating teachers (authors) and students collectively realised that the limitations of the training data, social robots being an emerging product category, may be conditioning the creative quality of the output and possibly causing a design fixation phenomenon to occur.
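
As a toy illustration of this prompt-tightening advice, the hypothetical function below strips articles, connectives, and conversational fillers from a draft prompt; the word lists are indicative examples only, not an official vocabulary of any tool.

```python
import re

# Hypothetical illustration of the prompt-tightening advice above: strip
# articles, connecting words, and conversational expressions from a draft
# prompt, keeping only the descriptive terms.
NOISE = {
    "a", "an", "the", "it", "is", "are",        # articles / fillers
    "and", "or", "so", "therefore", "because",  # connecting words
    "please", "could", "you", "make", "me",     # conversational expressions
}

def tighten(prompt: str) -> str:
    words = re.findall(r"[\w-]+", prompt.lower())
    return ", ".join(w for w in words if w not in NOISE)

draft = "Please could you make me a friendly pet-like robot and it is cute"
print(tighten(draft))
# -> friendly, pet-like, robot, cute
```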

This observation confirms the black box nature of current-generation GenAI systems, leaving designers in a trial-and-error, hit-and-miss dynamic that drastically reduces the control the designer can exercise over form generation, which may trigger frustration in novice designers and beginner-level users of GenAI tools. By contrast, in a classical form generation workflow during ideation, the designer is very intentional and mindful while adding every line to a sketch, exercising total control of the final output. This loss of control, and the resulting confrontation with many form proposals that deviate from the designer's intention, may trigger frustration; at other times, however, it may change the designer's train of thought, and it is probably the phenomenon cited in current literature as throwing "random visual stimuli" (Figoli et al., 2022).

A serendipitously presented random visual stimulus may opportunistically trigger an ideation moment to occur, which may birth an entirely novel or innovative concept. This is a good example of divergent thinking occurring during a design process.

4.2 Semantic Moodboards to Shape Coding Through Organic vs Synthetic Creativity

Once the students narrowed their problem definition to a specific pain point that could be addressed through social robotics, they identified a product opportunity gap on which to focus their ideation. The next step required them to begin shape coding exercises to create a visual representation of their solution, which would eventually concretise into their concept proposal. Semantic moodboarding was selected as the method for teaching shape coding skills in this workshop series and therefore became the central departure point for all students as they launched their concept design journey. A semantic moodboard visually represents a target symbolic adjective (keyword) through visual details of products drawn from various categories. The challenge to students was that the morphology and details of their social robot had to transmit the visual adjectives they selected, e.g. friendly, trustworthy, or cute.

Transitioning the semantic moodboard into form proposals is thus a critical step. Each student had to select three symbolic adjectives to attribute as target characters for their respective social robot. The products inserted into the moodboard were selected after careful consideration, filtering for details of products only and avoiding poetic compositions, abstract art, natural organisms, or other objects. The background was kept neutral in a solid colour, devoid of any foreign objects. The retention of colour in the selected images was allowed. Figures 4 and 5 demonstrate the variation between the traditional and GenAI creative paths that the novice designers typically adopted to arrive at a concept design proposal.

Figure 4: Project Cuteo (pet robot) – Semantic moodboard to concept design, demonstrating steps of concept generation through organic creativity. (Clockwise, top left) Moodboarding of three target symbolic adjectives for character attribution – cheerful, affectionate, helpful. (Top right) Classical ideation method using analogue sketching. (Bottom right) Scenario representation with Photoshop. (Bottom left) 3D modelling to 3D rendering, presenting the final concept design proposal.

Figure 5: Project Vivo (educational robot) – Semantic moodboarding to concept design with GenAI, demonstrating steps of concept generation through mixed methods of blended synthetic and organic co-creation. (Clockwise, top left) The first semantic moodboard (I), created organically, attributes the symbolic adjective "interactive". (Top right) The second moodboard (II), created synthetically with Midjourney by blending combinations of two images from moodboard I, derives a series of synthetic abstract forms. (Middle right) The third moodboard (III), created with Midjourney by blending moodboard I with a tripod-shaped web camera to explore functionality. (Bottom right) The fourth moodboard (IV), created with Midjourney by blending an exemplar social robot generated through Midjourney (shown in Figure 3, bottom left) with the web camera used in moodboard III. (Bottom left) Organic sketches refining the form proposed in moodboard IV. (Middle left) Final 3D renders resulting from a 3D model built by the design student in Rhino. This process is an example of co-creation with GenAI, combining organic and synthetic creativity.

In the traditional path of organic creativity based on classical methods represented in Figure 4, the design student picks specific details from exemplar artefacts of the moodboard and tries to transform the visual geometry into a conceptual product form using extensive sketching.

In the computational path of pure synthetic creativity based on GenAI, represented in Figure 3, traditional drawing-based creativity has no input. In this condition, a designer has no opportunity to leverage their drawing and 3D modelling training for achieving high-quality outcomes, but instead must master written text prompt inputs, which now hold a vital position in achieving visual mastery of concept form proposals.

For novice designers, another significant challenge with pure synthetic creativity is that the direct link between the moodboard and drawing is broken, as no detail is lifted from the moodboard for transformation into a form exploration sketch.

4.2.1 Blending

Blending is a hybrid approach leveraging the strengths of traditional creativity and computational creativity to augment the ideation work.

Blending involves selecting sketches, images, or concepts from moodboards and merging them with proposals generated by GenAI tools such as DALL-E (Liu, Vermeulen, Fitzmaurice, & Matejka, 2023) and Midjourney. The process begins with the designer inputting specific prompts into the AI platform, producing visualisations that can be combined with the initial human-generated concepts. Vizcom goes a step further and allows the designer to input an original sketch, supplemented with a text prompt, as the starting point for GenAI images in almost real-time rendering, thus leaning more heavily on the blending technique to achieve the final output.
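
Vizcom's sketch-to-image pipeline is proprietary, but a rough programmatic analogue of the "sketch plus prompt" blending idea exists in OpenAI's image-edit endpoint. The sketch below assumes the OpenAI Python SDK and a square RGBA PNG under 4 MB (the endpoint's documented constraints); it illustrates the blending concept, not Vizcom's actual mechanism.

```python
# Rough analogue of sketch-to-image blending: a designer's sketch plus a text
# prompt produce a new rendered image. This uses OpenAI's image-edit endpoint
# (dall-e-2), not Vizcom's proprietary pipeline.
# Assumes a square RGBA PNG under 4 MB; when no separate mask is supplied,
# transparent areas of the image mark where the model may repaint.
from openai import OpenAI

client = OpenAI()

with open("robot_sketch.png", "rb") as sketch:  # the designer's uploaded sketch
    response = client.images.edit(
        model="dall-e-2",
        image=sketch,
        prompt=("general-purpose domestic social robot, soft huggable form, "
                "matte fabric finish, warm domestic scenario"),
        n=1,
        size="1024x1024",
    )

print(response.data[0].url)  # URL of the blended, rendered result
```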

Blending offers several advantages: stimulating creative thinking, reducing the time spent on exploratory sketching by allowing the GenAI to fill in creative gaps, and overcoming creative blocks by increasing visual stimuli.

Figure 5 shows blending, which is a mixed path of synthetic and organic creativity merging GenAI with analogue sketching, giving a designer an opportunity to leverage their traditional drawing-based training to shape the output.

Figure 1 demonstrates a synthetic condition of blending where, instead of using their sketch, the design student merged a GenAI-developed image with another digital image of a product extracted from the moodboard but originating from the internet.

The teachers did not artificially force the adoption of blending, so as not to influence the impact of its adoption on the students' ideation workflow. As the workshop progressed and the students' comfort with the GenAI software increased, they organically discovered that pictures and sketches, in addition to written text, could also be used as prompting aids. This discovery triggered a phase of blending in ideation (Figure 5), incorporating visual elements alongside textual prompts, thereby allowing the students to indicate their design intent more effectively to the GenAI system and increasing their overall control over the process. The authors observed a significant qualitative improvement in the output, enhancing the form proposals in terms of novelty and functionality. This multimodal approach to prompting was a pivotal moment in ideation for many students, helping them generate more relevant and diverse visual concepts (Hutson & Cotroneo, 2023) that appeared both more original and more suitable to the intended purpose, i.e. more creative.

The authors present this blended approach as co-creation with GenAI, merging organic and synthetic creativity, which may be an intriguing possibility for increasing the productivity of novice designers in the ideation phase of the design process. In such a dynamic, the GenAI tools may serve as a reasonable force multiplier, boosting the output quantity and quality of students who are weaker in traditional design tools and techniques and thus levelling the playing field inside a classroom setting. The actual impact of this phenomenon has yet to be studied thoroughly and is a likely subject for further research.

4.3 Limitations of GenAI Synthetic Creativity

The authors observed that, as the workshop progressed through several cycles of generation, a noticeable dilution of form diversity started to appear on the Midjourney platform. Although the output of generated form proposals was quantitatively immense, the same could not be said of its qualitative diversity (Figure 3). Midjourney is a text prompt-based system that does not give the designer much control over the final output, apart from selecting the most relevant generated image and running another cycle of generations based on it. This black box approach can derail the ideation, often distracting novice designers. Blending can increase the level of intervention.

4.3.1 Insights from Student Projects

Upon comparing the results of ideation based on pure synthetic creativity through GenAI prompting (Figure 3) with those of blended creativity merging synthetic and organic co-creation methods of moodboarding, drawing, 3D modelling, and GenAI prompting (Figure 5), one can see the qualitative difference in form resolution, conceptual sophistication, and consideration for HRI.

Pure synthetic ideation suffers from design fixation, permitting only a very narrow interpretation of what a social robot's form could be. Novice designers not trained to spot design fixation may get distracted by, or become convinced of, the limited visual vocabulary of the GenAI. Figure 3 illustrates that even after several cycles of prompting, the final results are repetitive, indicating a circular workflow with no evident progress.

This is the case with text-to-image GenAI like Midjourney; with sketch-to-image GenAI like Vizcom (Figure 6), design fixation is not such an immediate concern. This is because Vizcom was designed specifically for product design applications, and its interaction mechanics are based on the insertion of an original sketch or drawing from the human designer, which is then modified or completely replaced by a new synthetic image. The human designer always maintains control over how much interference Vizcom is allowed and can therefore actively intervene to limit the creep of design fixation.

Figure 6: Project Hug (general-purpose assistive robot) – Miro board demonstrating an overview of the mixed methods workflow from the fourth workshop edition (2024), based on drawing, paper prototyping, and Vizcom GenAI. (Clockwise, top left) Ideation with sketches and paper prototyping. (Top right) Sketch-to-GenAI transition with Vizcom by merging the sketch with synthetic rendering emerging from prompts (see Figure 1). (Bottom right) Fine-tuning the concept design by tweaking the prompts and experimenting with altering details. (Bottom left) The final concept design proposal rendered directly in Vizcom. In Vizcom, the transition from hand sketching to a high-fidelity 3D visualisation is much more fluid, and the boundary between organic and synthetic creativity is more porous; the human designer has much more control.

In contrast, the blended creativity approach breaks the design fixation pitfall. Figure 5 shows how inserting classical methods can recover the ideation once a designer detects repetitive or stereotypical proposals from the GenAI. In Figure 5, the design student's abductive thinking is visible in the step of exploring moodboard III, where the decision to insert a tripod webcam morphology into the form generation mix is evident. The designer in this case is considering issues related to the stabilisation and deployment of the robotic body, with the possibility of reducing the volumetric footprint when not in use. The workflow in the case of blended creativity is much more linear, approximating a classical pattern of ideation. GenAI application in this scenario is dosed and carefully applied only at specific moments, typically the early stages of ideation; classical drawing methods are then relied upon to derive a well-resolved, unique, and innovative robotic concept that breaks the stereotypical image of social robots.

4.3.2 GenAI Implementation Strategies for Classroom Teaching

When training novice designers to use GenAI for product design, the authors advise teachers to monitor their workflow closely, ensuring that students do not abandon sketching and drawing and instead use GenAI only to broaden the exploration canvas. The form generations proposed by GenAI during blended moodboarding change the direction of the design students' thinking: some of the synthetic shapes would arguably never have occurred organically had the students not inserted GenAI into their ideation. The influence of synthetic creativity is visible in the final concept design proposals.

Integrating GenAI in product design education influences student engagement as well as the quality and quantity of their outputs (Figures 2 and 3). GenAI tools like Midjourney, DALL-E, and Vizcom induce varying levels of reliance on generated content among students. Some students become overly dependent on GenAI, resulting in superficial exploration of design concepts, while others adeptly use such tools to complement their traditional design practice skills and improve their craft. These observations empirically align with the literature discussed in Section 2 (Doshi & Hauser, 2024): creators exhibiting weak creativity are the ones who lean on GenAI tools and benefit the most from them, while those with stronger creativity either do not benefit or see their output quality degrade. This disparity also underscores the educator's role in guiding students towards a balanced approach, ensuring that GenAI serves as a tool for creative augmentation rather than substitution. Further work is needed to deepen this dimension of the research presented here.

The 2024 workshop edition deployed Vizcom in addition to Midjourney, and the authors noticed a higher adoption rate for this software owing to its low barrier to entry and gentler learning curve. By tailoring its UI features and functionality to the specific needs of product designers, Vizcom makes it easier for students to generate relevant, high-quality visual concepts, enhancing the potential for co-creating with AI-assisted ideation in the design process. Vizcom merges text prompts with human-generated sketching to achieve the final output, with the possibility of managing the influence of the GenAI on the generated proposal (Figure 6). The human-generated original content thus assumes a central role in the generated image. The authors assess that Vizcom lowers the barrier between organic and synthetic creativity, or rather makes it porous, allowing the designer to retain a sense of control over the form generation output and making the black box more transparent.

To bridge the gap between organic and synthetic form generation, and especially to overcome design fixation, the teachers promoted blending techniques, that is, merging AI-generated forms with human-generated forms, as sketched below.
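As a minimal sketch of what such a blend can mean in practice (the workshops performed this step visually on Miro boards and in image editors, not in code), two same-sized images, one organic and one synthetic, can be weighted into a single composite with Pillow; the file names and alpha value are hypothetical.

```python
# pip install pillow
from PIL import Image

# Hypothetical inputs: both images must share the same size and mode.
human = Image.open("hand_sketch.png").convert("RGB").resize((512, 512))
synthetic = Image.open("genai_proposal.png").convert("RGB").resize((512, 512))

# alpha weights the synthetic contribution: 0.0 keeps the human form intact,
# 1.0 surrenders it entirely to the GenAI output.
blended = Image.blend(human, synthetic, alpha=0.4)
blended.save("blended_form.png")
# The composite can then seed another image-to-image pass (see the
# diffusers sketch above) for a further co-created iteration.
```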

Within a teaching scenario, it is important to alert novice designers that images generated by current-generation GenAI tools will always require postproduction work in Photoshop, Sketchbook, or other image-editing software to achieve high-quality results. Over-reliance on GenAI can atrophy the critical analysis skills of novice designers; the authors propose that blending helps maintain those skills by encouraging the designer to use traditional methods to intervene in, alter, and improve the content generated by the GenAI.

While GenAI tools can render a wide range of shapes and details in photorealism, the ultimate creative quality of the concept proposals depends on the designer's ability to integrate and refine these AI-generated forms into the project's problem setting and intended outcome. Here the designer's creative craft comes into play through critical evaluation and iterative development, which are crucial to transforming initial ideation into viable and coherent concepts. The designer must carefully consider the feasibility of AI-generated proposals and maintain a balanced influence between synthetic and organic input; otherwise, the ideation path can quickly get derailed.

Under such conditions, GenAI provides a path to innovation by assisting novice designers in achieving intricate and unconventional form geometries that they might not otherwise have considered owing to limitations in their drawing skills. The blending technique positively impacts the diversity and quality of concept design output, overcoming some of the GenAI limitations mentioned above.

5 Discussion

This is a position paper that locates the authors' opinion within the evolving discussion on GenAI and product design. That opinion was shaped through classroom teaching, using case studies based on student projects in a university setting. The body of work presented in this research allowed the authors to gain insight into the burning question: will professional product design practice end with the eventual advancement of GenAI technologies (Hernández-Ramírez & Ferreira, 2024)?

GenAI is a rapidly progressing industry, and at the moment of writing, ever more potent developments are constantly being presented; with the literature on the subject still taking shape, it is too early to tell. Some have argued that GenAI cannot replace human agency because it does not understand the significance of the output it produces (Hicks et al., 2024), operating only as a probabilistic system that tokenises the input text. Industry experts such as Mira Murati, then Chief Technology Officer of OpenAI, have weighed in that only low-level, weak human creativity will be replaced by GenAI (Dartmouth Engineering, 2024), which aligns with the inference drawn from the literature analysed in this study (Doshi & Hauser, 2024) that human creators with weak creativity are the ones whose output quality benefits the most from access to AI.

In the research presented in this article, all the students participating in the workshops were given unhindered access to GenAI systems, and all of them used these tools extensively, regardless of skill or creative potential. However, comparing the qualitative output of the workshop editions without and with access to GenAI makes the true benefit of unhindered access as a classroom training tool somewhat questionable. As reported in the section on findings and results, design fixation arises when using GenAI for robotics design; owing to the emergent nature of social robotics as a product category, limited training data may be conditioning the output negatively. Implementing GenAI on well-established classical product categories of low to medium complexity, such as furniture and appliances, in a similar classroom setting could validate or negate this claim. This issue needs further investigation by the authors and can be the subject of more in-depth future studies.

On the question of GenAI built on LLM technology not intending the significance of its output, text-to-text GenAI systems like ChatGPT have been labelled bullshit machines (Hicks et al., 2024). The authors argue that, by extension, even text-to-image GenAI built on similar technology can be questioned on the basis of the output witnessed during this research (Figure 7). In the two workshop editions using GenAI, many generated proposals were unfeasible from the perspective of functionality and manufacturability. This confirms the argument that GenAI is appropriate only for initial ideation and concept design proposals. In terms of the Double Diamond process model (Design Council, n.d.), GenAI is not suitable for the "Develop" and "Deliver" phases, when engineering and manufacturing details need to be fixed in a product development task.
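To make the phase gating concrete, it could be encoded as simply as the following sketch; the phase names follow the Design Council model, while the tool lists are illustrative assumptions rather than a prescription from this study.

```python
# Illustrative encoding of where GenAI sits in the Double Diamond:
# the divergent front end, never the engineering back end.
GENAI_SUITABILITY = {
    "Discover": ["Midjourney", "DALL-E", "Vizcom"],  # broad form exploration
    "Define": ["Vizcom"],                            # refining a chosen direction
    "Develop": [],  # engineering resolution: classical CAD (e.g. Rhino)
    "Deliver": [],  # manufacturing detail: no GenAI involvement
}

def genai_permitted(phase: str) -> bool:
    """Return True if GenAI use is advisable in the given phase."""
    return bool(GENAI_SUITABILITY.get(phase))
```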

Figure 7

Project Lampi (educational robot), from the third workshop edition (2023). (Clockwise, top left) Prompt-generated and blended exploration in Midjourney. (Top right) Sketch development of shape-coded forms carrying the character attributions of communicative, calm, and comprehensive, based on blended results and followed by manual 3D modelling in Rhino to resolve HRI and functionality. (Bottom right) Scenario painting to explain HRI and technical resolution based on the Rhino 3D modelling. (Bottom left) Midjourney-generated blended results from merging two objects extracted from the semantic moodboard, containing proposals not feasible for technical development and manufacturing.

Adopting the mixed methods approach of blending, which merges GenAI synthetic creativity with organic creativity based on manual sketching, is a possible way to overcome this limitation of GenAI. Figure 7 (bottom left) shows that the pure synthetic creativity of GenAI does not achieve feasible forms. Manual intervention (top left and top right) through blending, sketching, and 3D modelling can take the best pieces from the GenAI proposals and inject the critical analysis and abductive creativity of the human designer to achieve the best results.

5.1 Necessity to Train Critical Analysis for Leveraging GenAI in Future

The emergence in 2025 of "reasoning" LLMs such as DeepSeek R1 and OpenAI o3, which allegedly undertake critical analysis and, more like a human, break a complex problem down into simpler constituent components (Woodle, 2025), could offer a qualitatively different kind of output once deployed in creative applications such as art and design. A significant disruption of the product designer's ideation workflow could occur if reasoning models could be presented with a problem-setting scenario in textual format and generate divergently original and practically suited visual proposals from it. The designer would then adopt the role of a critical thinker, needing to dispassionately evaluate whether the reasoning AI's synthetic proposals are superior to their own organic ones.
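If such a workflow materialises, it might look something like the following speculative sketch using the OpenAI Python client; the model name, prompts, and overall pipeline are assumptions for illustration, not a tested setup from this study.

```python
# pip install openai  -- speculative sketch, assuming an OpenAI-compatible
# reasoning model; reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

brief = (
    "Problem setting: a general-purpose domestic robot for small apartments. "
    "Constraints: non-threatening form, stowable footprint, long-duration HRI."
)

# Step 1: the reasoning model decomposes the brief into sub-problems.
response = client.chat.completions.create(
    model="o3-mini",  # assumed reasoning-capable model
    messages=[{
        "role": "user",
        "content": "Break this design brief into its constituent "
                   f"form-giving sub-problems:\n{brief}",
    }],
)
print(response.choices[0].message.content)

# Step 2 stays human: the designer critically evaluates each sub-problem
# and only then drafts text-to-image prompts for visual proposals.
```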

Therefore, the authors assess that critical analysis of AI-generated outputs needs to become a fundamental aspect of studio-based product design education. Designers must rigorously scrutinise GenAI outputs to avoid being misled by visually appealing yet impractical designs. This evaluative task requires thoroughly analysing AI-generated proposals for feasibility and intended functionality. The critical judgement needed to weigh hedonistic aesthetic attractiveness against pragmatic design viability will become an even more fundamental product design skill after the mass adoption of GenAI. It will underpin the relevance of keeping a human product designer throughout the design process, especially within the iterative ideation phase, and will ensure that GenAI functions as a supplementary enhancement to the human designer's skill set rather than a substitute.

Educators must also engage in ongoing professional development to stay abreast of GenAI technologies, given their rapid pace of development. By fostering a culture of knowledge-sharing among higher education institutions, educators can collaboratively develop robust classroom teaching frameworks that effectively integrate GenAI tools into the twenty-first-century design education curriculum.

Finally, the ethical question of AI alignment, and of addressing potential biases in GenAI imagery that may shape the perception and mental models of often young novice designers in formation, will become ever more important as this technology is widely adopted.

6 Conclusion

In this research, the authors contrasted two approaches to ideation: a traditional method based on organic creativity and a mixed methods approach based on blended organic and GenAI creativity.

The traditional ideation method based on classical tools demonstrated that strong creativity skills of abductive thinking, sketching, and 3D modelling are needed to transfer a semantic moodboard into conceptual designs of social robots. Owing to the effort and abductive thinking required to generate each proposal, the volume of ideation was comparatively low.

A purely synthetic computational ideation method based on GenAI tools did not require students to possess strong creativity to achieve conceptual design proposals, but the results suffered from design fixation, possibly due to limitations of training data on emerging product categories such as social robots.

The blended ideation method presented in this research was demonstrated to be an effective strategy for breaking the hold of design fixation and achieving feasible concept designs of social robots. This observation confirms that classical product design training remains relevant in ideation even when working with GenAI technologies. With blending, GenAI acted as a catalyst helping students overcome the initial creative block by offering many proposals in a short amount of time, thereby serving as visual stimuli.

Though GenAI generated many proposals, they lacked aesthetic diversity and functional viability. The GenAI appeared to focus on hedonistic aesthetic appeal while neglecting pragmatic considerations, and many of its proposals reinforced stereotypical imagery of robotics. The capability to perform critical analysis of a generated design is entirely lacking in current-generation GenAI tools. Since social robotics is an emerging product category and roboticists are still resolving the issue of form and aesthetics, high-quality creative training data may still be lacking.

With these limitations in mind, critical analysis and abductive thinking are the skills in which human product designers should be robustly trained to guarantee the necessity of a human presence in ideation work as the industry progressively deploys more GenAI for creative tasks. These skills should be the primary focus of training for future product designers, aimed at eliminating the redundant and infeasible concepts that emerge and infiltrate the design process in the era of AI.

In conclusion, text-to-image and sketch-to-image GenAI tools are suitable only for early ideation and concept design generation, not for engineering and technical design resolution. It is therefore currently not possible to expect any design-driven innovation from GenAI beyond aesthetical innovation (Rampino, 2011). Blending is an effective method for designers to leverage this strength of GenAI while exploring form and aesthetics in early-stage ideation, but they must dose its application episodically, at appropriate moments in ideation and concept generation, while deploying their critical analysis and classical design skills to eliminate unfeasible proposals. In a classroom teaching scenario, the authors assess that deploying GenAI tools can open the design student's mind to the vast possibilities that exist before converging on a definitive proposal. GenAI can therefore catalyse creativity and divergent thinking in novice product designers as far as form exploration is concerned.

Acknowledgments

GenAI tools, including OpenAI’s GPT-4o and Claude 3 Opus, were utilised to assist in drafting and refining the text presented in this research work. The authors have reviewed and edited the content to ensure accuracy and originality to the best of their ability.

  1. Funding information: This work is financed by national funds through FCT – Foundation for Science and Technology, I.P., within the scope of funding reference: UIDB/05237/Esad Idea – Association for the Promotion of Research in Design and Art, and funding reference UID/04057: Research Institute for Design, Media and Culture.

  2. Author contributions: This research is a collaborative effort of all the authors in terms of the conceptualisation, research, and writing of this article.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: All the data collected and shared within this research are intellectual property of the authors. The students whose projects are cited within this work grant consent to share their work within the scope of academic research performed by their teachers by being part of the teaching activities occurring within the confines of the HEIs with which the authors are affiliated.

References

Boschetti, R. (2014). Could robots become too cute for comfort? BBC News. https://www.bbc.com/news/technology-29737539.

Brown, T. (2009). Change by design: How design thinking transforms organizations and inspires innovation. New York: HarperBusiness.

Causo, A., Vo, G. T., Chen, I. M., & Yeo, S. H. (2016). Design of robots used as education companion and tutor. In Robotics and Mechatronics: Proceedings of the 4th IFToMM International Symposium on Robotics and Mechatronics (pp. 75–84). Springer International Publishing. doi: 10.1007/978-3-319-22368-1_8.

Chiou, L. Y., Hung, P. K., Liang, R. H., & Wang, C. T. (2023). Designing with AI: An exploration of co-ideation with image generators. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (pp. 1941–1954). doi: 10.1145/3563657.3596001.

Claburn, T. (2024). Letting chatbots run robots ends as badly as you'd expect. The Register. https://www.theregister.com/2024/11/16/chatbots_run_robots/.

DALL-E. (n.d.). Creating images from text. https://openai.com/index/dall-e/.

Dartmouth Engineering. (2024). AI everywhere: Transforming our world, empowering humanity [Video file]. https://youtu.be/yUoj9B8OpR8?si=K136qfHxYQCuP3Vh&t=1768.

David, D., Thérouanne, P., & Milhabet, I. (2022). The acceptability of social robots: A scoping review of the recent literature. Computers in Human Behavior, 137, 107419. doi: 10.1016/j.chb.2022.107419.

Design Council. (n.d.). The double diamond. https://www.designcouncil.org.uk/our-resources/the-double-diamond/.

Dorst, K., & Cross, N. (2001). Creativity in the design process: Co-evolution of problem–solution. Design Studies, 22(5), 425–437. doi: 10.1016/S0142-694X(01)00009-6.

Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290. doi: 10.1126/sciadv.adn5290.

Figoli, F. A., Rampino, L., & Mattioli, F. (2022). AI in design idea development: A workshop on creativity and human-AI collaboration. In Proceedings of DRS (pp. 1–17). doi: 10.21606/drs.2022.414.

Harvey, S., & Berry, J. W. (2023). Toward a meta-theory of creativity forms: How novelty and usefulness shape creativity. Academy of Management Review, 48(3), 504–529. doi: 10.5465/amr.2020.0110.

Hegel, F. (2012). Effects of a robot's aesthetic design on the attribution of social capabilities. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (pp. 469–475). IEEE. doi: 10.1109/ROMAN.2012.6343796.

Hernández-Ramírez, R., & Ferreira, J. B. (2024). The future end of design work: A critical overview of managerialism, generative AI, and the nature of knowledge work, and why craft remains relevant. She Ji: The Journal of Design, Economics, and Innovation, 10(4), 414–440. doi: 10.1016/j.sheji.2024.11.002.

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. doi: 10.1007/s10676-024-09775-5.

Hoggenmueller, M., Lupetti, M. L., Van Der Maden, W., & Grace, K. (2023). Creative AI for HRI design explorations. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 40–50). doi: 10.1145/3568294.3580035.

Hutson, J., & Cotroneo, P. (2023). Generative AI tools in art education: Exploring prompt engineering and iterative processes for enhanced creativity. Metaverse. Special Issue: The Art in the Metaverse, 4(1), 1–14. doi: 10.54517/m.v4i1.2164.

Jansson, D. G., & Smith, S. M. (1991). Design fixation. Design Studies, 12(1), 3–11. doi: 10.1016/0142-694X(91)90003-F.

Krippendorff, K. (2005). The semantic turn: A new foundation for design. Boca Raton: CRC Press. doi: 10.1201/9780203299951.

Krippendorff, K., & Butter, R. (1984). Product semantics: Exploring the symbolic qualities of form. Innovation, 3(2), 4–9.

Kędzierski, J., Kaczmarek, P., Dziergwa, M., & Tchoń, K. (2015). Design for a robotic companion. International Journal of Humanoid Robotics, 12(1), 1550007. doi: 10.1142/S0219843615500073.

Kulkarni, C., Druga, S., Chang, M., Fiannaca, A., Cai, C., & Terry, M. (2023). A word is worth a thousand pictures: Prompts as AI design material. arXiv preprint arXiv:2303.12647.

Liu, V., Vermeulen, J., Fitzmaurice, G., & Matejka, J. (2023). 3DALL-E: Integrating text-to-image AI in 3D design workflows. In Proceedings of the 2023 ACM Designing Interactive Systems Conference (pp. 1955–1977). doi: 10.1145/3563657.3596098.

Midjourney AI – Free image generator. (n.d.). https://midjourney.co/.

Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [From the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100. doi: 10.1109/MRA.2012.2192811.

Paananen, V., Oppenlaender, J., & Visuri, A. (2023). Using text-to-image generation for architectural design ideation. International Journal of Architectural Computing, 22(3), 458–474. doi: 10.1177/14780771231222783.

Phillips, P. L. (2004). Creating the perfect design brief: How to manage design for strategic advantage. Skyhorse Publishing Inc.

Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., & Sutskever, I. (2021). Zero-shot text-to-image generation. In International Conference on Machine Learning (pp. 8821–8831). PMLR.

Rampino, L. (2011). The innovation pyramid: A categorization of the innovation phenomenon in the product-design field. International Journal of Design, 5(1), 3–16.

Shukman, D. (2015). Being comfortable in robotics' uncanny valley. BBC News. https://www.bbc.com/news/science-environment-32028539.

Simeone, L., Mantelli, R., & Adamo, A. (2022). Pushing divergence and promoting convergence in a speculative design process: Considerations on the role of AI as a co-creation partner. In D. Lockton, S. Lenzi, P. Hekkert, A. Oak, J. Sádaba, & P. Lloyd (Eds.), DRS2022: Bilbao. Bilbao, Spain. doi: 10.21606/drs.2022.197.

Tovey, M., Porter, S., & Newman, R. (2003). Sketching, concept development and automotive design. Design Studies, 24(2), 135–153. doi: 10.1016/S0142-694X(02)00035-2.

Ulrich, K. T., & Eppinger, S. D. (2012). Product design and development (5th ed.). New York: McGraw-Hill/Irwin.

Vizcom. (n.d.). Imagineart AI art generator | free AI image generator. Imagine.Art. https://www.imagine.art.

Woodle, A. (2025). What are reasoning models and why you should care. HPCwire. https://www.hpcwire.com/2025/02/06/what-are-reasoning-models-and-why-you-should-care/.

Zhang, C., Wang, W., Pangaro, P., Martelaro, N., & Byrne, D. (2023). Generative image AI using design sketches as input: Opportunities and challenges. In Proceedings of the 15th Conference on Creativity and Cognition (pp. 254–261). doi: 10.1145/3591196.3596820.

Received: 2024-08-02
Revised: 2025-04-21
Accepted: 2025-05-28
Published Online: 2025-08-16

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
