AI_deation: A Creative Knowledge Mining Method for Design Exploration

Authors

  • George Palamas, Aalborg University Copenhagen, Copenhagen, Denmark
  • Alejandra Mesa Guerra Risvang, Aalborg University Copenhagen, Copenhagen, Denmark
  • Liana-Dorina Møsbæk, Aalborg University Copenhagen, Copenhagen, Denmark

DOI:

https://doi.org/10.4108/eetct.v9i3.2685

Keywords:

graphic design, visualization, design exploration, machine learning, gradient-based analysis, design theory, ideation

Abstract

Ideation is a core activity in the design process: it begins with a design brief and results in a range of design concepts. Because of its exploratory nature, however, it is challenging to formalise computationally. Here, we report a creative knowledge mining method that combines design theory with a machine learning approach. The study begins by introducing a graphic design style classification model that serves as a basis for the aesthetic evaluation of images. A Grad-CAM technique is then used to visualise where the model is looking, in order to detect and interpret visual syntax, such as geometric influences and color gradients, and to determine the most influential visual semiotics. Our comparative analysis of two Nordic design referents suggests that the approach can be used efficiently to support and motivate design exploration. Based on these findings, we discuss the prospects of machine-vision-aided design systems not only to envisage concepts and possible design paths, but also to support educational objectives.
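As a rough illustration of the pipeline the abstract describes (a style classifier whose decisions are inspected with Grad-CAM), the following Python/TensorFlow sketch computes a Grad-CAM heatmap for a trained Keras classifier. It is a minimal sketch, not the authors' implementation: the layer name "last_conv" and the assumption of a fine-tuned Keras CNN with design-style output classes are illustrative placeholders.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="last_conv", class_index=None):
    """Heatmap of the image regions that drive the predicted style class."""
    # Auxiliary model mapping the input to (conv feature maps, class scores).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, scores = grad_model(image[np.newaxis, ...])
        if class_index is None:  # default: explain the top-scoring class
            class_index = int(tf.argmax(scores[0]))
        class_score = scores[:, class_index]
    # Gradient of the class score w.r.t. each spatial feature-map activation.
    grads = tape.gradient(class_score, conv_maps)
    # Global-average-pool the gradients: one importance weight per channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = tf.nn.relu(tf.reduce_sum(conv_maps[0] * weights, axis=-1))
    # Normalise to [0, 1] so the map can be resized and overlaid on the input.
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

Upsampled to the input resolution and overlaid on the poster or pattern under analysis, such a heatmap highlights the regions, e.g. geometric motifs or gradient areas, that the classifier associates with a given style.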

References

Laing, S. and Masoodian, M. (2016) A study of the influence of visual imagery on graphic design ideation. Design Studies 45: 187–209. doi:10.1016/j.destud.2016.04.002.

Bestley, R. and Noble, I. (2016) Visual Research: An Introduction to Research Methods in Graphic Design (London, United Kingdom: Bloomsbury Publishing), chap. 2, 3rd ed., 34–54.

Herring, S.R., Chang, C.C., Krantzler, J. and Bailey, B.P. (2009) Getting inspired! Understanding how and why examples are used in creative design practice. In Conference on Human Factors in Computing Systems - Proceedings (New York, NY, USA: Association for Computing Machinery): 87–96. doi:10.1145/1518701.1518717.

Sio, U.N., Kotovsky, K. and Cagan, J. (2015) Fixation or inspiration? A meta-analytic review of the role of examples on design processes. Design Studies 39: 70–99. doi:10.1016/j.destud.2015.04.004.

Victionary [ed.] (2017) Truly Nordic: Nordic Craftsmanship, Branding Campaigns and Design (North Point, Hong Kong: Victionary).

Kahneman, D. (2011) Thinking, Fast and Slow (New York: Farrar, Straus and Giroux).

Obeso, A.M., Benois-Pineau, J., Acosta, A.A.R. and Vázquez, M.S.G. (2016) Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features. Journal of Electronic Imaging 26(1): 011016. doi:10.1117/1.jei.26.1.011016.

Ng, H.W., Nguyen, V.D., Vonikakis, V. and Winkler, S. (2015) Deep learning for emotion recognition on small datasets using transfer learning. In ICMI 2015 - Proceedings of the 2015 ACM International Conference on Multimodal Interaction (New York, NY, USA: Association for Computing Machinery, Inc): 443–449. doi:10.1145/2818346.2830593.

Chu, W.T. and Guo, H.J. (2017) Movie genre classification based on poster images with deep neural networks. In Proceedings of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes (Association for Computing Machinery): 39–45.

Wang, W., Zhao, M., Wang, L., Huang, J., Cai, C. and Xu, X. (2016) A multi-scene deep learning model for image aesthetic evaluation. Signal Processing: Image Communication 47: 511–518.

Li, J., Yang, J., Hertzmann, A., Zhang, J. and Xu, T. (2019) LayoutGAN: Generating graphic layouts with wireframe discriminators. arXiv preprint arXiv:1901.06767.

Géron, A. (2017) Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (Sebastopol, CA: O’Reilly Media, Inc.).

LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11): 2278–2324. doi:10.1109/5.726791.

Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012) ImageNet Classification with Deep Convolutional Neural Networks. Tech. rep., NIPS. URL http://code.google.com/p/cuda-convnet/.

Agrawal, A., Lu, J., Antol, S., Mitchell, M., Zitnick, C.L., Parikh, D. and Batra, D. (2017) VQA: Visual Question Answering. International Journal of Computer Vision 123: 4–31. doi:10.1007/s11263-016-0966-6.

Vinyals, O., Toshev, A., Bengio, S. and Erhan, D. (2015) Show and Tell: A Neural Image Caption Generator. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Boston, MA: IEEE): 3156–3164. doi:10.1109/CVPR.2015.7298935.

Durand, T., Mordan, T., Thome, N. and Cord, M. (2017) WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE).

Shin, H.C., Roth, H.R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J. et al. (2016) Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Transactions on Medical Imaging 35(5): 1285–1298. doi:10.1109/TMI.2016.2528162.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K. and Fei-Fei, L. (2009) ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (IEEE): 248–255.

Raina, R., Battle, A., Lee, H., Packer, B. and Ng, A.Y. (2007) Self-taught Learning: Transfer Learning from Unlabeled Data. In Ghahramani, Z. [ed.] ICML '07: Proceedings of the 24th International Conference on Machine Learning (New York, NY, USA: Association for Computing Machinery): 759–766. doi:10.1145/1273496.

Pan, S.J. and Yang, Q. (2010) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22(10): 1345–1359. doi:10.1109/TKDE.2009.191.

Wang, R.W.Y. and Hsu, C.C. (2007) The method of graphic abstraction in visual metaphor. Visible Language 41(3): 266–279. URL https://search.proquest.com/docview/232933474?accountid=8144.

Hsu, C.C. and Wang, W.Y. (2018) Categorization and Features of Simplification Methods in Visual Design. Art and Design Review 6: 12–28. doi:10.4236/adr.2018.61002, URL http://www.scirp.org/journal/adr.

Mouron, R. (2020) A. M. Cassandre by Henri Mouron. Chapter 1: A New Aesthetic of the Poster. URL https://www.cassandre-france.com/chapter-1.

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A. and Torralba, A. (2016) Learning Deep Features for Discriminative Localization. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Las Vegas, NV: IEEE): 2921–2929. doi:10.1109/CVPR.2016.319.

Oquab, M., Laptev, I. and Sivic, J. (2015) Is object localization for free? Weakly-supervised learning with convolutional neural networks. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Boston, MA: IEEE): 685–694. doi:10.1109/CVPR.2015.7298668.

Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D. and Batra, D. (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV) (IEEE): 618–626.

Chen, L., Wang, P., Dong, H., Shi, F., Han, J., Guo, Y., Childs, P.R. et al. (2019) An artificial intelligence based data-driven approach for design ideation. Journal of Visual Communication and Image Representation 61: 10–22.

Koch, J., Lucero, A., Hegemann, L. and Oulasvirta, A. (2019) May AI? Design ideation with cooperative contextual bandits. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Association for Computing Machinery): 1–12.

Marimekko (2020) About Marimekko. URL https://company.marimekko.com/en/about-marimekko/.

Marimekko (2013) Marimekko: In Patterns (San Francisco, California: Chronicle Books).

Berg, M. (2020) Mads Berg Illustration - About. URL https://madsberg.dk/about.

Han, D., Liu, Q. and Fan, W. (2018) A new image classification method using CNN transfer learning and web data augmentation. Expert Systems with Applications 95: 43–56. doi:10.1016/j.eswa.2017.11.028.

Simonyan, K., Vedaldi, A. and Zisserman, A. (2013) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. CoRR abs/1312.6034. URL http://arxiv.org/abs/1312.6034.

Yosinski, J., Clune, J., Bengio, Y. and Lipson, H. (2014) How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14 (Cambridge, MA, USA: MIT Press): 3320–3328.

Ruizendaal, R. (2020), Deep learning #3: More on cnns & handling overfitting. URL https://tinyurl.com/yccyreff.

Raimes, J. and Renow-Clarke, B. (2007) Retro Graphics: A Visual Sourcebook to 100 Years of Graphic Design (San Francisco, California: Chronicle Books).

Published

23-11-2022

How to Cite

Palamas G, Guerra Risvang AM, Møsbæk L-D. AI_deation: A Creative Knowledge Mining Method for Design Exploration. EAI Endorsed Trans Creat Tech [Internet]. 2022 Nov. 23 [cited 2024 Dec. 28];9(3):e5. Available from: https://publications.eai.eu/index.php/ct/article/view/2685