Generative learning has received much attention recently, mostly due to its capabilities in image and text generation. The basis of generative learning is data that follows a certain statistical distribution. Using this data, a generative model is trained to sample from that distribution, and thus to produce new samples that were never part of the training data set yet share all of its statistically relevant features. The decisive point about generative learning is scalability: given sufficient data, it can learn distributions as complex as human faces, or even the full variety of images of all types found on the internet.
Especially for such a very general distribution, narrowing in on particular sub-domains of images via (text-based) conditioning is crucial. Conditional generative learning is therefore key to the practical applicability of the technology, as the sketch below illustrates.
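To make the idea of conditioning concrete, here is a minimal sketch of a class-conditional generator in PyTorch. This is an illustration only, not the authors' model: all class names, layer sizes, and the label-embedding approach are assumptions chosen for brevity. The key point is that the condition is injected alongside the latent noise, so sampling is steered toward a requested sub-domain of the learned distribution.

```python
# Minimal sketch of a class-conditional generator (illustrative only; not the
# authors' method). The condition is a discrete class label, mapped to a
# learned embedding and concatenated with the latent noise vector.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_classes=10, embed_dim=16, out_dim=784):
        super().__init__()
        # Learned embedding turns the discrete condition into a dense vector.
        self.label_embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),  # outputs in [-1, 1], e.g. normalized pixel values
        )

    def forward(self, z, labels):
        # Concatenating noise and condition means the same noise produces
        # different samples under different conditions, narrowing the model
        # to the requested sub-domain of the learned distribution.
        cond = self.label_embed(labels)
        return self.net(torch.cat([z, cond], dim=1))

# Drawing new samples: fresh noise per call, condition chosen by the user.
gen = ConditionalGenerator()
z = torch.randn(4, 64)               # 4 latent samples
labels = torch.tensor([3, 3, 7, 7])  # requested sub-domains (class ids)
fake = gen(z, labels)                # 4 generated samples, shape (4, 784)
```

Text-based conditioning, as used in large text-to-image systems, works on the same principle, with the label embedding replaced by an embedding of the prompt.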
This article appeared in the January 2024 issue of BENCHMARK.
| Reference | bm_jan_24_1 |
|---|---|
| Authors | Drygala, C.; Gottschalk, H.; Kruger, P.; di Mare, F.; Krebs, W.; Werdelmann, B. |
| Language | English |
| Type | Magazine Article |
| Date | 10th January 2024 |
| Organisations | Technical University Berlin; Ruhr University Bochum; Siemens |
| Region | Global |