Capturing Worlds with CitySynth | Inside Unreal



Watch a world form before your eyes this week as we speak with the Co-Founder of YDrive about their incredible project CitySynth! (You may know them for their work on EasySynth.) This crazy tech lets anyone capture real-world environments using just their phone; it then automatically matches Megascans textures and uses machine learning to produce UE assets. Come prepared with your questions for the live Q&A!
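For readers curious about the overall shape of the pipeline described above, here is a purely illustrative sketch. CitySynth's API is not public, so every function and name below is an assumption made up for illustration: phone video in, geometry reconstructed, surfaces matched against a Megascans-style texture library, and an asset exported for UE.

```python
# Hypothetical sketch of the capture-to-asset flow described in the stream.
# None of these functions exist in CitySynth; they only stand in for the
# stages: video -> reconstruction -> texture matching -> UE asset export.
from dataclasses import dataclass

@dataclass
class Mesh:
    vertices: list        # reconstructed geometry (placeholder)
    material_slots: list  # surface types awaiting textures

def reconstruct_from_video(video_path: str) -> Mesh:
    # Stand-in for the photogrammetry / ML reconstruction step.
    return Mesh(vertices=[], material_slots=["asphalt", "brick"])

def match_library_texture(surface: str) -> str:
    # Stand-in for matching a captured surface to a library texture.
    catalog = {"asphalt": "megascans/asphalt_clean",
               "brick": "megascans/brick_red"}
    return catalog.get(surface, "megascans/default")

def export_ue_asset(mesh: Mesh, textures: list) -> None:
    # Stand-in for writing out a UE-ready asset.
    print(f"Exporting asset with {len(textures)} matched textures: {textures}")

mesh = reconstruct_from_video("street_capture.mp4")
textures = [match_library_texture(s) for s in mesh.material_slots]
export_ue_asset(mesh, textures)
```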

ANNOUNCEMENT POST:
https://forums.unrealengine.com/t/inside-unreal-capturing-worlds-with-citysynth/586960

TRANSCRIPT:
https://epicgames.box.com/s/52ih0o9r88gscycag2ljp4hk9kkh39q2


30 thoughts on “Capturing Worlds with CitySynth | Inside Unreal”

  1. To the team: would it be possible to input multiple recordings? For example, a ground-level or car-level video, another from slightly above (say, from a drone) to cover the tops, and maybe one more from the opposite side, to capture much of the scene from all angles. Combining all of that would give you more data on the same scene.

  2. What is the name of the app that was used when you were inside the car? Is that something you all are still working on for a release? I have been working on my project for two years and would love to test the app if and when it's in beta.

  3. The most amazing thing was at 38:33. To the devs working on this: if you've ever seen the movies Déjà Vu with Denzel Washington or Minority Report, this project reminds me of the tech in those films. Back to my project: in 2007 I lost someone close to me, and ever since then I've had to move on with my life. I got married and had children; my kids are young adults now. But what happened in 2007 has stayed with me daily. There isn't a single day my memory doesn't play those moments back. With that in mind, I decided to make a documentary. I turned to CGI, started learning Blender, and started playing around with ripping 3D map models from Google Maps and importing them into Blender. I want to recreate something, and ripping those meshes and textures from Google Earth/Maps is not the problem; the problem is the low quality of both the meshes and the textures. Unless my scene is viewed from drone altitude, the meshes look too bad to use in the documentary I'm trying to create. This right here is the dream I have been waiting for. I hope this helps push the devs forward, seeing what a huge impact this has on us, the users.

  4. The custom rendering pipeline with machine learning is the future for photogrammetry like this. Just wow. Yes! So excited for this. That scene you showed is next-level stuff. It reminds me of how AI solves for missing data: it takes a guess. When we see material added from nothing, because there was no data to extract it from, ML is adding that missing data from the dataset it was trained on (see the inpainting sketch after these comments for a toy illustration of the idea).

  5. This looks beautiful. I'm so excited for the baseline that's presented here.
    I would like the process to be iterative, so that we can overlay other angles and multiple views to remove the gaps. With that iteration, I could easily turn this into a finished product for my purposes with very little effort.

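On the "filling in missing data" point raised in comment 4: below is a minimal sketch of the idea. It uses OpenCV's classical (non-learned) inpainting as a stand-in for the learned completion shown in the stream; CitySynth's actual method is not public, and nothing here comes from it.

```python
# Toy illustration of filling a hole in a captured texture.
# cv2.inpaint fills from surrounding pixels; a trained model would instead
# "guess" plausible content from its training set, as the comment describes.
import numpy as np
import cv2

# Fake "captured texture" with some detail on it.
texture = np.full((256, 256, 3), 180, dtype=np.uint8)
cv2.circle(texture, (128, 128), 40, (60, 90, 200), -1)

# Mask marks pixels the capture never observed (no data to extract from).
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:160, 40:110] = 255
texture[mask == 255] = 0  # knock the data out

# Fill the hole from the surrounding context.
filled = cv2.inpaint(texture, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("filled_texture.png", filled)
```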
