NVIDIA’s New AI: Nature Videos Will Never Be The Same

Time-lapse image sequences offer visually compelling insights into dynamic processes that are too slow to observe in real time. However, playing a long time-lapse sequence back as a video often results in distracting flicker due to random effects, such as weather, as well as cyclic effects, such as the day-night cycle. We introduce the problem of disentangling time-lapse sequences in a way that allows separate, after-the-fact control of overall trends, cyclic effects, and random effects in the images, and describe a technique based on data-driven generative models that achieves this goal. This enables us to “re-render” the sequences in ways that would not be possible with the input images alone. For example, we can stabilize a long sequence to focus on plant growth over many months, under selectable, consistent weather.

Our approach is based on Generative Adversarial Networks (GANs) that are conditioned with the time coordinate of the time-lapse sequence. Our architecture and training procedure are designed so that the networks learn to model random variations, such as weather, using the GAN’s latent space, and to disentangle overall trends and cyclic variations by feeding the conditioning time label to the model using Fourier features with specific frequencies.
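To give a sense of the conditioning described above, here is a minimal sketch of encoding a time coordinate as Fourier features at chosen frequencies. The function name, the choice of frequencies (one per day for the day-night cycle, one per year for seasons), and the linear trend term are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

def fourier_features(t, frequencies):
    """Encode a scalar time coordinate t (in days) as sin/cos Fourier features.

    Each frequency contributes a (sin, cos) pair, so a frequency of 1.0
    captures the day-night cycle and 1/365 the yearly cycle. These
    frequencies are illustrative, not the paper's exact values.
    """
    t = np.asarray(t, dtype=np.float64)
    feats = []
    for f in frequencies:
        feats.append(np.sin(2.0 * np.pi * f * t))
        feats.append(np.cos(2.0 * np.pi * f * t))
    return np.stack(feats, axis=-1)

# Hypothetical conditioning vector for the generator: a slow linear trend
# term (overall growth) concatenated with daily and yearly cyclic encodings.
t = 400.0  # days since the start of the sequence
cond = np.concatenate([[t / 365.0], fourier_features(t, [1.0, 1.0 / 365.0])])
```

Because the cyclic components are periodic by construction, the generator can only express day-night and seasonal variation through them, while aperiodic randomness such as weather is pushed into the latent space.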

We show that our models are robust to defects in the training data, enabling us to amend some of the practical difficulties in capturing long time-lapse sequences, such as temporary occlusions, uneven frame spacing, and missing frames.
