PROGRAMMING (Engine, API, Hardware, etc)

  • geoffbirch

    Posts: 9

    Posted:

    Hey @BParry_CIG
I am loving the new particle lighting and shadowing brought in with the 2.5 patch. I cannot tell you how often I have just walked around looking for places where my character's shadow intersects with particles, like in the .webm below:
    https://gfycat.com/AgitatedShamelessIntermediateegret

    But at the same time I could not help but notice how a number of particle effects seemingly are only lit by ambient light (cube maps?):
    Here: https://gfycat.com/DistinctUntimelyAmethystinepython


    Or some that are lit by local lights / the sun yet cannot be shadowed, thus making them stick out like sore thumbs:
    Here (lit by sun, but not shadowed): https://gfycat.com/JovialSpecificEquestrian
    or Here (lit by local light, and player flashlight, but not shadowed): https://gfycat.com/BrokenFastAtlanticsharpnosepuffer

My question on top of this would be: is this a bug in the lighting system, or is it a result of it being artist-specified how particles are lit and which can even be shadowed? If the latter, IMO it would perhaps be better to have a shadow system that works no matter what, to prevent random inconsistencies caused by someone not checking a flag in the editor.

This is one for Ali really, as he did the particle lighting work. Some of it, I believe, is down to using env probes from the old lighting method: their selection can be not so great, and they're much more of an approximation. The particle with no shadow at all, though, yeah, that's one for Ali. Ben did reckon it could be a per-particle thing, that that particle doesn't use the new lighting solution, but I was under the impression the new system affected all particles. Again, Ali will have the answer! I'll forward it on to him for when he's not busy... so expect a reply in 2035 or so. :P
  • geoffbirch

    Posts: 9

Posted:
Edited: by AwesomeAD

    @BParry_CIG Hi there,

    Came across something interesting and wondered whether you knew anything about it. I was admiring the helmet on the medium armor set, looking at the reflections of Olisar, and noticed that my ship wasn't in the reflection.

    Is this part of the whole Picture in Picture issue of rendering the scene again? Is the reflection just a "game memory" of what a blank Olisar looks like from that part of Crusader...?


[screenshots: Port Olisar reflected in the helmet visor, with the player's ship missing from the reflection]

Let me answer this real quick: it's because of frustum culling, meaning all objects that the camera cannot directly see are culled from the view. Should everything be rendered at all times, we would spend 60-70% of the GPU rendering things that shouldn't be visible. :)
    Hey there @SgtRasmadrak

Thank you for the reply :) Ah, so this is part of the selective rendering they talked about way back when! Makes a lot of sense..! At the end of the day, not having perfect reflections isn't going to ruin my experience haha

    Have a good one
Well actually, culling doesn't really have anything to do with this. The reflections you see are environment probes: these are artist-placed, and they render out the scene in 6 different directions (a cube map) when the artist places them.

    Here's a representation:

[diagram: an environment probe capturing the scene in six directions to form a cube map]

Now, what you see in the reflections on the glass is an angle-calculated lookup into that environment probe. However, when the artist baked out the env probe during level creation, they did not have your ship there - why would they! So when we do the lookup, your ship is nowhere to be seen. Now, in a perfect world, instead of doing an angle calculation and a lookup into an environment cube map, we'd do a similar calculation, but it would be a ray trace from the camera to the visor and off in whatever direction the angle determines, directly to the object we want to reflect. So why don't we do this? Well, it's crazy expensive and it doesn't REALLY scale very easily. So it's not that the geometry isn't there because it has been culled; it's that the reflection calculation is super complicated and expensive. There are techniques to bring better and more accurate reflections into a scene, e.g. box projection, screen-space reflections, planar projection, and rendering the world upside-down to get reflections on the floor. But that's for another lesson :)
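The angle-calculated lookup described above can be sketched in a few lines of plain Python. This is only an illustration of the general cube-map technique, not CIG's renderer: `reflect` mirrors the view direction about the surface normal (the same formula as GLSL's `reflect`), and `cubemap_face` picks which of the six baked probe directions the lookup falls into.

```python
def reflect(incident, normal):
    """Mirror the incident (view) direction about the surface normal.
    Both are unit 3-vectors; same formula as GLSL's reflect()."""
    d = sum(i * n for i, n in zip(incident, normal))
    return [i - 2.0 * d * n for i, n in zip(incident, normal)]

def cubemap_face(direction):
    """Pick which of the six probe faces (+x/-x/+y/-y/+z/-z) a
    reflection vector points into: the dominant axis wins."""
    axis = max(range(3), key=lambda i: abs(direction[i]))
    sign = '+' if direction[axis] >= 0 else '-'
    return sign + 'xyz'[axis]

# A view ray angled down at a floor (normal straight up, +y) bounces up:
r = reflect([0.6, -0.8, 0.0], [0.0, 1.0, 0.0])
face = cubemap_face(r)
```

Whatever `r` points at *now* is irrelevant: the lookup only ever sees what the probe contained when it was baked, which is why the ship never shows up.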
  • geoffbirch

    Posts: 9

    Posted:


    ...

This is a good idea, I'll see if I can send this out to the DevOps guys in Texas, I'm sure they're up for the challenge. I mean, once we get a Linux client for the engine sorted, this will be required too, right? They're probably already working on it tbh, it might even exist in dev-only form atm, but you never know till you ask. Leave it with me.

    Linux client?

    Linux Client confirmed ! :p
Yeah, CR stated ages ago that he wants the game to be on Linux at some point, and in theory that'll be a huge step closer once the Vulkan pipeline is finished and in place. As CR has stated before, we run our servers on Linux, and we have a build system in place for the programmers to verify good code, which compiles the build on a multitude of different compilers, one or two of which are Linux-compatible. So it's not like we're going to have to rewrite some code for Linux, or compile under Linux and find a multitude of bugs; we're in a constant state of readiness, just like a spider!!! Those devious little gits are always ready *shudders*.
  • BParry_CIG

    Developer

    Posted:

Are there any plans for doing a closed test specifically for Linux, in the vein of the current Evocati? I personally would like to help out in that regard, and I'm sure there are more backers waiting for such an opportunity.


    Like most people in LUG and/or the Unofficial Linux Thread. In case you are looking for possible participants at some point.

    Same here, SC is now the only application I still boot into Windows for. Anything I can do to bring more games to PC (not just Windows) I'd gladly do.

    I started a Thread for signing up for Beta-Testing on Linux in April 2014.
    https://forums.robertsspaceindustries.com/discussion/120415/where-can-i-signup-for-linux-alpha-and-beta-testing/p1

By the way, for Linux-only gamers there should be a possibility to upgrade their pledge, because without any Windows it does not make much sense to spend more money on a game that you will possibly never play.

    Oh dear, what hath Geoff wrought?
    I'd expect a Linux client test is still a long way off. You'll probably see loads of stuff about getting the next-gen APIs up, running and performant even on Windows, some time before you need to worry about missing out on the Linux client testing.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG
    Wow, the backlog in here sure did mount up...

I have a question: why can I download the game until I have 3.1 MB left, and then my download speed goes so low it just stops the download completely?

    Hi @AquaTurtle,
    This is outside my area, but it's probably because we use the BitTorrent protocol to spread patches, which means you're downloading chunks of patch from several different people at once. When you get into the last couple of chunks, you'll be copying them from individual people, so if one of them's gone super slow for some reason, it'll get stuck there until the downloader gives up on them completely and tries someone else.
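The endgame stall Ben describes can be shown with a toy model (hypothetical Python, nothing to do with the actual launcher): if each chunk comes whole from one peer and peers work in parallel, the total download time is bounded by the slowest peer's queue, no matter how fast everyone else was.

```python
def download_time(chunks, peer_speeds):
    """Toy model: each chunk is fetched whole from one peer, chunks are
    assigned round-robin, and peers download in parallel.  Total time is
    the slowest peer's queue -- which is how one stalled peer can pin a
    patch at '3.1 MB left' until the downloader gives up on them."""
    queues = [0.0] * len(peer_speeds)
    for i, size in enumerate(chunks):
        peer = i % len(peer_speeds)
        queues[peer] += size / peer_speeds[peer]
    return max(queues)

# 100 equal chunks across four healthy peers finish quickly...
fast = download_time([1.0] * 100, [10.0, 10.0, 10.0, 10.0])
# ...but the same download with one near-dead peer takes ~10,000x longer,
# even though most of the data arrived at full speed.
slow = download_time([1.0] * 100, [10.0, 10.0, 10.0, 0.001])
```

Real BitTorrent clients mitigate exactly this with an "endgame mode" that requests the final pieces from several peers at once.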

    Hi Ben,

Frequent flyer with another question. I thought of taking this to the Issue Council, but don't know where to even begin describing it. Let's just say I was buzzing round in an Argo, I landed and got out to look at it, and then the game crashed with this CryEngine warning message:
[screenshot: CryEngine warning dialog]

    Definitely log that through the normal channels, but weirdly enough that error's in @geoffbirch's code. Geoff, why are your warnings firing on release builds?

    Hi there. :)
    So I guess my question about decoupled mode was not suited for this discussion here.
    Could anyone point me to the right "ask a dev" section for decoupled and ESP questions please? :D

    Hi @RadiantFlux. I think this may be a question that you could fire at tech-designers, but it could also be that you're in the right thread and the appropriate dev just unfortunately isn't reading, sorry.

The dark sides of surfaces such as ships, stations, and asteroids, not facing the sun directly, are too dark. There should be some bounced lighting reaching those surfaces, because space is filled with lit objects such as stars and planets, which reflect the light from the stars!

This scene was animated using Blender Cycles, which uses a path-tracing rendering algorithm for accurate lighting. Space is filled with stars from distant galaxies that light up the night sky. Even on the surfaces of the ships and asteroids not facing the sun, you can still see the surface because of bounced lighting.

[video still: a path-traced space scene rendered in Blender Cycles]



Here is a video from Star Citizen where the surfaces not facing the sun are pitch dark. This is very unrealistic lighting, resembling the kind of graphics DOOM 3 had.

[video still: Star Citizen, with surfaces facing away from the sun rendered pitch black]


Is there any way to maybe bump up the bounced lighting a little, so that dark surfaces not facing direct sunlight will still show some detail?

    This is a pretty complex subject to unpack, and it's an argument with many zealots on many sides, so there's no definite answer to a lot of this.
    That said, some of your start points here are inaccurate. While it's true that distant stars shine on us from all directions, the amount of light that they contribute is very small - consider how dark it is outdoors, far away from a city, compared to how bright it is on a sunny day. The atmosphere blocks a little of both, but the ratio of brightness between them will be roughly the same. As an alternative example, I was recently in a very long and unproductive Facebook argument about whether the moon landings were real. One standard argument for their fakeness is "why are there no stars?" and the straightforward answer is "because it's day time", not only are stars not bright enough to noticeably illuminate the back side of a sunlit object, they're not even bright enough to be visible when you point a camera right at them!
    But then this takes us to the other side of the argument. No one wants a space game where all the stars are invisible, and most scifi fans have become accustomed to objects in space being lit with a cinematic key/rim/fill light rig. There's a strong push to throw realism aside and just do the same.
    But then there's another side to this argument that says, sure, key/fill/rim sounds great, but film directors have the convenience of moving those around during camera cuts. How are you supposed to put a rim light behind the most important object when the player keeps swinging their head around? If, instead, you just dial up the light contribution of the stars, everything starts to look flat as it does in the video above. Some people just have every scene include a big purple-orange nebula in opposition to the sun, but here we are letting the player fly anywhere around the sun and we can't put every star in the centre of a donut-nebula (it's a cheap trick anyway).

    Hope this has given you a taster of the endless circular backlighting-in-space argument :D
    We'll figure something out, some kind of compromise. If I ever make my own space sim, though, I swear I'm not setting it in space.
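Ben's "stars aren't bright enough" point survives a back-of-envelope check. The figures below are rough, commonly quoted photographic values, assumed here purely for illustration:

```python
import math

# Rough, commonly quoted illuminance values (order-of-magnitude
# assumptions for illustration, not measured data):
SUNLIGHT_LUX = 100_000.0   # direct midday sunlight
STARLIGHT_LUX = 0.001      # moonless, clear night sky

ratio = SUNLIGHT_LUX / STARLIGHT_LUX   # starlight is ~10^8 times dimmer
stops = math.log2(ratio)               # ~26-27 camera stops apart
```

Roughly eight orders of magnitude, or over 26 photographic stops: far outside what one camera exposure (or one tonemapped frame) can hold, which is why a sunlit hull and visible stars don't coexist without artistic cheating.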

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
ChangeYourName:

    Simple question here. Will 21:9 aspect ratios be supported? For more detailed conversations, see here:

    https://forums.robertsspaceindustries.com/discussion/344529/will-star-citizen-support-21-9-aspect-ratios#latest

    To sum it up:


I was just watching my friend's screen, and at 3440x1440 it is still very much fisheyed. And yes, that fisheye view is normal in games that haven't supported those aspect ratios right. It looks great in games that have done it right, which is why I am asking whether it will be done right here.

    Does anyone have anything concrete from the dev team? Or is everyone just spouting off the first thing that comes to mind, technical understanding be damned?

Didn't you read what I wrote? I have a 3440x1440 monitor and 21:9 is implemented. It's not working correctly right now, but if you reset it a couple of times it will eventually display correctly WITHOUT any fisheye effect. I obviously would know what it's supposed to look like, since I play other games. Not to mention it displays incorrectly more than half the time, so I recognise it instantly either way in this particular game.

EDIT: I don't recall any dev in particular saying "21:9" on some feature list, but there have been bugs on the Issue Council for ultrawide resolutions only that were addressed, such as helmet field of view. How could that happen if it wasn't supported? If that's not enough for you, I don't know what to tell you, man.
    Support for 3440x1440 resolution is there but not the 21:9 aspect ratio if that makes any sense. It's because the FOV math is not correct for 21:9, it's never been correct so the whole resetting thing made me lol. You can alter the FOV with a USER.cfg file but it just alters the vFOV and doesn't do anything for the hFOV (it's still very similar to 16:9 math). But in any case, true 21:9 is not supported until we get both a vFOV and hFOV slider or until they improve on the math specifically for 21:9 FOV. It's currently nonexistent. If you want to see an FPS game with proper ultrawide support, check out Doom. They implemented support for mixed aspect ratios, 4:3, 16:9, 16:10, 21:9 and a FOV slider that works in the aspect ratio you set. Really awesome feature and I hope SC does something like this. Would seriously make everyone happy.

    Honestly, I wish they'd come out and tell us that either they'll support 21:9 and give us a proper ultrawide FOV or no they won't because they fear it will affect FPS for competitive reasons. Sure would make hardware decisions a lot easier for people that mostly plan on playing SC. As an ultrawide user I hope they do it, I'd hate to dump my monitor because of some stupid arbitrary competitive balance reason. If that's the reason they better nix VR support because of the unbalance that brings, especially if they allow leaning with it. But for the love of god, please don't go the overwatch route and give us a cropped 16:9 image for us ultrawide users.
    Hi there @ChangeYourName,
    Ah, aspect ratios, my favourite topic after 32-vs-64-bit and whether the moon landings really happened!
    The short answer to this, I guess, is that I'm pretty certain that the camera projection we're using is already correct for 21:9 aspect ratio. The reason we only expose the vFOV is that there's much more variance in monitor widths than there are in heights, and for any given vFOV and aspect it's pretty easy to calculate a working hFOV.
    Some caveats though: people with wide monitors often complain of the image looking pinched in the middle. As discussed somewhere earlier in this thread, what actually seems to be happening is that the outer edges are stretched by the projection, but that people have some natural tendency to sit further back from a wide monitor, or that whatever internal correction the brain uses to handle weird monitor FOV just doesn't work when the edges are there.
    Whatever the cause, you can confirm that the projection works by setting the vFOV to the exact angle that your monitor occupies in your visual field (or if you prefer, scooting your chair forward until it matches). The stretched image at the edge of the monitor is now foreshortened by your viewing angle on it, and should be correct.
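The "pretty easy to calculate a working hFOV" step follows directly from projecting onto a flat plane. A minimal sketch (my own helper, not SC's code):

```python
import math

def hfov_from_vfov(vfov_deg, aspect):
    """Horizontal FOV implied by a vertical FOV under a standard planar
    perspective projection, where aspect = width / height."""
    half_v = math.radians(vfov_deg) / 2.0
    half_h = math.atan(math.tan(half_v) * aspect)
    return math.degrees(2.0 * half_h)

# The same 60-degree vFOV widens naturally on an ultrawide panel:
h_16x9 = hfov_from_vfov(60.0, 16 / 9)   # ~91.5 degrees
h_21x9 = hfov_from_vfov(60.0, 21 / 9)   # ~106.9 degrees
```

So exposing only the vFOV loses nothing: the horizontal angle falls out of the aspect ratio for free.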

A few things I've not taken into account here though...
    1) I imagine that true 21:9 support falls outside my expertise, and includes things like using the additional screen space for meaningful information, making HUDs less cramped, etc. That's way outside my department, and I've no idea how much other departments intend to invest in that kind of thing.
    2) Curved monitors. I've never had to deal with a curved monitor, I don't know what best practices are for them. You can't fix the camera maths to do it, so you'd have to add some kind of post-process distortion, probably like what VR headsets do. I think some drivers already offer to do that for you, scares the willies out of me.
    3) Separate monitors in a surround config. Theoretically we could do these right, we'd need to draw each one with its own camera and have some sort of GUI to let the player describe all the sizes, distances and angles. At the moment we'll just be assuming it's flat though, and any surround-ness will just be showing you the wrong thing.


    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG

    I want to add a little more to it:

Right now, the Khronos Group is not only actively seeking feedback from studios that implement Vulkan as an API; time is not stopping, and they are also working on Vulkan 2.0 - called Vulkan Next - which also brings a lot to the table, especially if you look at points like cross-process sharing and vendor-agnostic multi-GPU support. It all sounds a little more like the thing AMD wanted with its Heterogeneous System Architecture, but unified.

    (The people deeper into that tech could probably talk hours about it, so I'll just stop at that glimpse)

    The question is now:
Now that we know Vulkan Next is in development, do you guys plan on "early adopting" it (or better said, making the code base ready for quick adoption), so that once it's out you can "easily" implement it into the existing code base? Or is the plan rather "we take what we have now, start to implement it, and whatever comes in the future is another topic"?

I know I'm talking double-future here, as Vulkan isn't even implemented yet - but my question goes in the direction of "we know that Vulkan Next is capable of this and that, and we need the code base to be designed like 'x' to be compatible, so we already prepare for it".

    -----------
Side note for the devs:
You may wonder why we, the players, are so interested in this kind of topic, even though it's pure code and probably the most boring aspect for a gamer to look at. The thing is: it's not. We love tech, and we love stuff being on the bleeding edge, like the game itself. We ride with you along the way of progress and are interested not only in the result, but also in the making of it, as it's damn interesting. We may not all be hardcore code gurus, but things like Bugsmashers have a pretty solid following. (Also, a lot of us come from more tech-oriented fields.)

This interest gets boosted by the craving to finally get rid of old socks (like Windows) in order to get fresh ones. Vulkan helps on many fronts there: taking away API overheads, taking away magical driver optimisations for games (so that raw power suddenly matters again, instead of not having a bottleneck), and taking away dependencies - and with them a long tail of stuff we don't want but have to use because of political decisions everyone hates.

    Hi @Valdore,
    This is interesting information, not something I was at all aware of. I don't know what the plans are in this area but we should definitely be keeping an eye on what Vulkan 2 wants us to do.

    Hi Devs!

First, awesome work you guys are doing, kudos!

Currently I'm trying to add some Star Citizen ships into my own little CryEngine project.
While trying to unpack the .pak files in the StarCitizen\Data folder with 7-Zip, I noticed a few files got a "Corrupt head" warning.

    The files that gave an error while unpacking are:
    objects-part0.pak, objects-part5.pak, objects-part8.pak, scripts.pak
(I made sure all my StarCitizen files are exactly as they are after a fresh download from the SC launcher.)

I was wondering if the corrupt headers on some of those .pak files could be causing instability in the game client. If those errors are not a problem and not causing any instability or issue, then please don't mind me :)

    Thanks!

Edit: Oh, PS, I forgot to ask this. When I'm trying to load objects/images from Star Citizen into a CryEngine scene I get the following error:
    CGF Loading Failed: Index stream 8 has damaged data (elemSize:1900548) [File=objects/animals/fish/fish_clean_prop_animal_01.cgf]
    Do you perhaps know what this is and how it can be fixed? Thanks!!

    Hi @UINS_InzaOnoa,
    If I had to guess (and I do) then at least the second problem will be because we've added or removed something compared to standard .cgf, so the formats have probably become incompatible. I don't know if a similar thing has happened with your 7zip warning, perhaps we've customised the header data? If the same files are all reliably complaining after an update, but still opening, I doubt that it's a corruption issue.
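One cheap way to tell "customised format" from "on-disk corruption" is to look at a file's leading magic bytes. The sketch below checks the standard .zip signature (CryEngine .pak files are conventionally zip-based, which is why 7-Zip can open them at all); the function name and the idea of a tweaked header are illustrative, since whatever CIG changed is internal to them.

```python
ZIP_MAGIC = b"PK\x03\x04"  # signature at the start of every standard .zip archive

def looks_like_standard_zip(header: bytes) -> bool:
    """Check the leading bytes of a file against the vanilla .zip
    signature.  If a file that the game itself loads happily fails this
    check, a customised header is a likelier explanation than damage."""
    return header[:4] == ZIP_MAGIC

vanilla = looks_like_standard_zip(b"PK\x03\x04\x14\x00")   # True
custom = looks_like_standard_zip(b"CIG1-custom-header")    # False
```

The same logic applies to the .cgf error: a third-party loader validating fields against the stock format will report "damaged data" for anything the format's owner has extended.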

    How is SLI/CROSSFIRE/Explicit-Multi-Adapter going?

Hi @zerkerz, deep down in the backlog I'm afraid. We maintain some SLI and Crossfire machines as standard workstations within the render team, so as to see any problems we cause immediately, but fine-tuning for them needs to come after fine-tuning single-adapter, and something like explicit multi-adapter would have to come after the new APIs get here.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    Hi Dev team!

    I've been following development for a couple of years now with a lot of interest and I've played PC games enough years (decades...) to really appreciate what RSI is accomplishing here...thank you for your great work! My question, though, approaches infrastructure and bandwidth limitations to what is *really possible* within game. I am giddy to hear about procedural planets, the ability for players to eventually build bases on planets, physical (rendered) cargo boxes, modular ship models, modular and customizable character clothing etc. But at a certain point something must limit the level of detail and modularity in game: either a polygon count, or a limitation on the amount of information that must pass between client and server, or maybe actual server storage space will be an issue when the game has to keep track of too much information...? Can you please say a word about what the most practical limitations are and how these limitations will affect what is ultimately achievable in game?

    Thank you so much!
    -Hallkel

    This is a big question @Hallkel!
I think the biggest immediate limit is the network limit on the number of players within a small space. Things like poly count are a problem, but there are always options for cutting detail more and more aggressively and cramming stuff in. A room with a hundred characters might not look great, but we could probably get it to run. Network-wise, there are tricks that can be applied where you reduce the rate of updates on less important (e.g. distant) players, but if they're all rammed into a small space, fighting one another, that's a bucketload of data no matter what you do.
    For all I know the network team has all sorts of things up their sleeves to improve that limit, but I imagine that in the end that'll still be our ceiling.
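The distance-based update-rate trick Ben mentions is a form of interest management, and can be sketched as a simple falloff curve. Every name and number below is illustrative, not CIG's netcode:

```python
def update_hz(distance_m, base_hz=30.0, near_m=50.0, min_hz=1.0):
    """Toy interest-management curve: full update rate for players
    within `near_m` metres, falling off inversely with distance beyond
    that, floored at `min_hz`."""
    if distance_m <= near_m:
        return base_hz
    return max(min_hz, base_hz * near_m / distance_m)

near = update_hz(10.0)       # nearby players get full-rate updates
far = update_hz(1500.0)      # distant players are cheap
# But a hundred players brawling in one room all sit inside `near_m`,
# so the server still sends 100 * base_hz updates every second:
room = 100 * update_hz(5.0)
```

The sketch makes the ceiling visible: falloff saves bandwidth only when players spread out, which is exactly the point about everyone being rammed into one small space.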

    Hey Ben,

So I am assuming that you guys are trying to solve some issues surrounding physics grids between ships that are "docked", so to speak.

    After reading up on the new CryEngine 5 it seems like it can solve most of these problems natively without having to implement any extra "fixes".

Now, I am on the outside looking in, but has anyone taken a look at what it would take to migrate from CryEngine 3 to CryEngine 5 to solve some of these physics issues (and maybe there are other beneficial things I missed in my analysis)?

    Or maybe someone has already looked at it and for X reasons its not a good idea. Just curious ;)


    Hi,

    Will SC's development be moving to using Cryengine 5.3 and 5.4 when they are released, or has the SC's heavily modified engine already moved too far away from Cryengine for the changes in 5.3 and 5.4 to be meaningful?

    Hi guys,
    I don't actually know what the licensing situation is, whether we ended up as CryEngi5eneers, Lumberyardsmen, etc or what (my head's been down working on something that's definitely not in either), and I can't speak to what other teams might want to merge from other editions of the engine.
    I'm pleased to see that CryEngine 5's moved to Github though - one of the big problems we had in the past with integrating changes from Standard CryEngine was that we'd receive a single, massive dump of the new engine, and (in part due to CryTek's architectural decisions) it was very difficult to distinguish which changes were related to a feature we wanted, which were unwanted features, typos, newbie coders going feral, etc.
    I don't want to downplay the advantages of being able to see how other people solved a problem in a very similar codebase though. Even though we didn't integrate 3.8, we've frequently compared its changes to what we're about to do to a given feature, then [agreed/disagreed/copied the code] accordingly.
    I doubt that we'd be "moving to CryEngine 5" though. I'm not even sure what that would mean at this point, perhaps we'd eventually take enough pieces that it was more 5 than 3? But it'd have passed through our hands and been put in differently, so it'd be hard to tell which of those changes were cut/paste/rename and which were read/inspire anyway.
    I guess I've taken a really long paragraph to say I don't know the answer, and I don't even know that there can be an answer.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    LiveOps/DevOps
    In August, the LiveOps/DevOps team deployed 11 builds to PTU and published 2.5.0 to Live. Since we also publish a lot of builds internally, every minute we shave off a build or build replication time makes a difference. Here are a few fun stats that demonstrate the size and scope of information that we’re dealing with —

    August Build Stats
    161 builds completed successfully
    Depending on branch and type, build sizes varied from 40-200 GB each
    7,728 GB of build data was generated
    Users deployed 78 servers across all build versions
    30,912 GB of build data was replicated between studios
    We transfer nearly 1,000 build copies a day to the desktop level across all studios with most testers and developers consuming multiple build versions per day
    The central build system is currently made up of 48 servers
    Currently configured with 524 cores and 812 GB of RAM with access to 400 additional cores during heavy build activity

Hi! These numbers are impressive!

I'm wondering... what build/deploy system do you use? I imagine you use something like Jenkins for such purposes.

    Regards,

PS: Sorry if you have already answered this question. Searching is not my strong suit.
Hi @Karfagen, I think we're on Buildbot at this point.

Would CIG consider developing procedural LOD technology for everything, rather than manually creating LODs? I'm not at all happy with how the LODs look on the Super Hornet, for example. Surely something could be developed to automate the process?

    Hi @Azaral,
    Anything's possible, but I doubt we'd do that. Firstly because procedural LOD is a massive R&D undertaking, but secondly because some of our worst LODs already came out of an auto-LOD system!
    It would be very impolite to blame our worst results on the LOD software that was used, since we later discovered a configuration mistake that meant rather than saying "LOD this as much as is reasonable" we'd said "Hit this arbitrary poly count no matter the cost". Even so, automatic systems don't seem great at understanding what details are most/least important to keep the model looking the same at a distance. It's much more likely that we'll see new ships having better hand-made LODs for a while, then the ship team cycling back around for a final improvement pass on the earlier assets once we're sure all the tech's locked down.
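The difference between "hit this arbitrary poly count no matter the cost" and "LOD this as much as is reasonable" usually comes down to driving decisions from how large the model appears on screen. A hedged sketch of the screen-coverage idea, applied here to LOD selection (all thresholds and names invented for illustration):

```python
import math

def lod_for_distance(distance_m, object_size_m, vfov_deg=60.0,
                     screen_h_px=1080, px_thresholds=(400, 150, 50)):
    """Choose an LOD index from projected on-screen height: detail
    driven by screen coverage rather than a fixed triangle budget."""
    half_v = math.radians(vfov_deg) / 2.0
    # Approximate height of the object on screen, in pixels.
    px = object_size_m / (2.0 * distance_m * math.tan(half_v)) * screen_h_px
    for lod, threshold in enumerate(px_thresholds):
        if px >= threshold:
            return lod
    return len(px_thresholds)  # below every threshold: lowest detail

# A 20 m ship fills most of the screen at 30 m (LOD 0), drops detail at
# 200 m, and reaches the lowest LOD by 1000 m.
lods = [lod_for_distance(d, 20.0) for d in (30.0, 200.0, 1000.0)]
```

A fixed-budget target, by contrast, decimates regardless of how visible the loss is, which matches the misconfiguration described above.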

    Heyo!
Given that CitizenCon is coming up and there will be a presumably polished showing of SQ42 and its cutscenes, are there any plans to clean up a lot of the per-object motion blur errors that occur in the current code? I only ask because I honestly think the effect greatly enhances the visuals of a cinematic presentation, as well as being a general crowd-pleaser for casual viewing and 30 Hz streams. Its inconsistent application in-game at the moment makes the game look much jerkier than it ought to.

It is no surprise, IMO, that the games that have won the SIGGRAPH real-time graphics award over the last half decade all had cinematics with wonderful filmic use of the effect. ;)
    Best,

    Well I.... oh, look at the date. I guess that's a no. Er, next question...
...

So does this mean you guys have decided on Vulkan over DX12? Or are you going to try to do both?

I know consoles have been touched on before, and you guys have said they're not powerful enough, but the new Xbox One Scorpio is going to be faster than the majority of gaming PCs out today (around GTX 970 performance with an Intel i5, if AMD's Zen cores are used). Any reconsideration of console porting with the new, more powerful hardware?
    I don't think there's been an official decision on which API to go for, but they have a lot in common. Saying "Vulkan" is certainly fewer syllables, so it tends to get thrown around more in discussion.
    Consoles-wise, that's a massive time/tech investment. I'd love it because it would give us access to the beloved PIX performance tool, but I guess that's not enough reason to build a port by itself...

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    0.jpg


    This is what is available for Unity for LOD - not sure how this compares to CryEngine's LOD system?

    I believe Unity's auto-LOD is an integrated version of Simplygon, which is the same tool we use. Mostly we've found that it tends to be better at dealing with curvy organic shapes than it does with hard edges and bevels.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    2) Curved monitors. I've never had to deal with a curved monitor, I don't know what best practices are for them. You can't fix the camera maths to do it, so you'd have to add some kind of post-process distortion, probably like what VR headsets do.

    I have a slightly curved 21:9 monitor and I can promise that no special processing is required apart from a correct hFOV. If I sit at the optimal distance from my monitor, the edges and the center are an equal distance from my eyes and things just look normal/natural.
    I'll have to look further into what people do with hFOV, but I can guarantee that when you have a curved monitor, something screwy has to be happening. The perspective projections we (and basically everyone else who isn't doing raytracing) use are designed around the idea of projecting all the geometry to a flat plane.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    Will there be a performance pass soon-ish? The game is unplayable even at the lowest possible settings on mid-grade modern gaming laptops that handle other current games with ease at 1080p.

    I certainly hope so. One of the things that really needs doing is making the lower settings provide better performance improvements. Currently it sets some limits on some things, but it's quite arbitrary and we've not proven that those features are what cost the performance in the first place.
    [hide]

    Hey there,

    Got a question about the optimization of the game in the future. I'm currently running the game on a rather powerful laptop, and all-in-all it seems to handle the job fine. However I notice that the processor (Disk) usage occasionally spikes to levels which make the game stutter or, on occasion, freeze for short periods. (Particularly in the Persistent Universe as I've had little to no trouble with Arena Commander).

    Is optimizing how the game runs/managing data usage going to be a large part of future updates, or are we looking at something that is just going to continue to bloat in that regard?

    I couldn't say what's causing your issues, but random massive disk spikes don't sound like an intended feature.
    [hide]

    Not sure where to ask, so I'll put it here...

    I'm looking for an Open-Source software project to contribute to, and I thought it would be cool to contribute to one that CIG makes use of. I suspect I'm far from the only backer with software dev experience that would be willing to "help out indirectly" via Open-Source projects.

    What OSS does CIG make use of behind the scenes? Any suggestions on projects/features/bugs that might be useful to have more individual contributor attention on?

    The two that I know of would be Buildbot and RenderDoc. Both of them excellent open source software. I don't know if either of them are hurting for new contributors, but that's certainly where I'd suggest looking.
    [hide]

    Hello, a question regarding the BUILD of Star Citizen and Squadron 42

    I would simply like to know if the final "Squadron 42" build will continue to "benefit" over time from "Star Citizen" build changes, or if, from the moment Squadron 42 is released, it will stay more "set in stone" the way it was released.

    Concrete examples: new ship iterations in SQ42, the "Odin" system being reworked over time for SC (and thus SQ42), etc.

    In short: will those changes "automatically" be integrated into Squadron 42, or do you have to do that manually each time after release (like a specific patch for Squadron 42)?

    Thank you !

    I expect that when Squadron comes out, it'll do so from a separate branch that won't automatically receive integrations from the PU. While many benefits might be portable, the single player campaign needs to be a locked-down design. Imagine we improved the turrets on a ship that turns up as an antagonist in the story - we might suddenly have made a mission unwinnable or extremely hard.
    [hide]

    Hello @BParry_CIG and @geoffbirch !
    I just saw a really interesting presentation online today about how the team at Tencent updated the IBL in CryEngine to support dynamic updates, as well as the SSR to support the latest Frostbite-style stochastic approach. Pretty amazing difference in the scenes they showcase:
    ssr_mixed_surface9aor5.png
    default_illumbepgh.png
    mhol_illumh2otr.png
    default_water92qtc.png
    mhol_water0wrs6.png
    Here is a link to the presentation: https://www.slideshare.net/secret/NlCgy4osDgxZbe
    They start talking about SSR at slide 42, and illum-shader SSR starts at slide 64. Interestingly, they made two different SSRs, one for water and one for the illum shader, because they wanted water to have SSR on at all times, even on low spec. So the water SSR is cheaper and does not account for some of the more interesting phenomena that the illum shader version does.

    You know how a few posts ago I said that it's very helpful to see how CryTek changed a feature, so we can see if we want to do something similar? Seeing how a totally different studio modded the engine is even better. "This stuff looks hot. We should do it." is basically what I said to Ali when Geoff opened up the screenshots.
    [hide]

    You know, when they say "The camera adds 10 pounds" they’re not kidding.
    Here’s the effect with different camera lenses while keeping the subject the same size.

    tumblr_oaxpwqGXEi1t98fjvo1_1280.gif

    So, what virtual focal length does Star Citizen use - and can we maybe change it, if the people around seem too big?

    Game cameras tend not to use focal length to control this, as it's a very photography-specific concept. However, changing the focal length on a camera changes its FOV, and so you'll see similar big-nose distortions of characters if you set a really wide FOV and stand too close to them. Setting a very narrow FOV and standing far from the character will give you the effect of a very high focal length.
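    To make that relationship concrete, here is a small sketch (illustrative only, not CIG code) of the standard focal-length-to-FOV conversion, assuming the common full-frame 35mm reference sensor; the function name is mine:

```python
import math

def focal_length_to_hfov(focal_mm, sensor_width_mm=36.0):
    """Horizontal FOV in degrees for a given focal length.

    Assumes a full-frame 35mm sensor (36mm wide), the usual
    reference when games or films quote a "focal length".
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# A "normal" 50mm lens works out to roughly a 40-degree horizontal FOV,
# while a wide 18mm lens approaches 90 degrees -- which is why wide FOVs
# produce the same big-nose distortion as short focal lengths.
```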
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    Hi,

    i have a question regarding FOVs. It may be a specific one

    Anyway, thanks

    Question: I am using 2 projectors, each at 1920*1080, and hence getting 32:10. So what does the FOV become on this setup? The answer will tell me what the curvature or placement geometry of the screens should be (i.e. where should I see the edges of the screen in my field of view?).

    Hi @KAVALCI,
    If you want to exactly match the game's projection, the rules are simple - don't curve anything, and sit where the game's vertical FOV matches your real one (e.g. for a 60 degree FOV, make sure the top edge of the screen is about 30 degrees up).
    In your example, assuming a FOV of 60 degrees (1.047 radians) and an aspect ratio of 32:10, the formula for your horizontal FOV is:
    2 * atan( aspect * tan(fovRadians * 0.5)) =
    2 * atan( (32/10) * tan(1.047 * 0.5)) =
    2.149 radians, or about 123 degrees.
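    For anyone who wants to plug in their own setup, the same formula can be wrapped in a short script (a sketch of the relation above; the function name is mine):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect):
    """Convert a vertical FOV (in degrees) to the matching horizontal
    FOV for a given aspect ratio: 2 * atan(aspect * tan(vfov / 2))."""
    v = math.radians(vertical_fov_deg)
    return math.degrees(2.0 * math.atan(aspect * math.tan(v / 2.0)))

# A 60-degree vertical FOV at 32:10 gives about 123 degrees horizontal,
# matching the worked example; a single 16:9 screen gives about 91.5.
```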
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    Hey,

    i just wanted to ask if those armor suits will look better on a 21:9 display later in development. Because now it looks like:

    Rg3TwcZ.jpg

    And feels like:
    xQcjgyw.jpg

    :D

    Hi @ElEC, I'm afraid that's probably a decision for art or design. I expect you'll be unsurprised to hear that there's no technological limitation preventing it :)

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    [hide]

    You know how a few posts ago I said that it's very helpful to see how CryTek changed a feature, so we can see if we want to do something similar? Seeing how a totally different studio modded the engine is even better. "This stuff looks hot. We should do it." is basically what I said to Ali when Geoff opened up the screenshots.

    Yeah it definitely is "hot". I really think it will increase the tactility and realism of all the metal surfaces in the game, especially in those large areas like hangar decks or on industrial planets like Arc Corp.
    One thing I do take note of is how low-resolution their implementation is, so I imagine it is a bit poppy and aliased in motion? Hard to know, since I have yet to download the game. They do mention adding an ultra mode with a higher internal resolution for the SSR, which probably helps that out a lot at the cost of performance. Nothing wrong with options though!

    Speaking of indirect lighting...
    CitCon happened!
    gif4o0s.png

    Areas like this had me wondering: have you guys or the Frankfurt planet team started thinking about planetside or indoor GI on some level? Obviously the planet looks fantastic ATM, but large shadowed areas seem to lack large-scale occlusion as well as bounced light (like from local rocks that happen to have sun on them), giving them a single-toned ambient grey look that lacks depth cues to a certain degree.

    Since that screenshot has a gun on screen, I am curious: I know the Gamescom demo showed off the near-field depth of field on helmets, rifle stocks and arms, but does it only apply if you have a helmet on? Currently in 2.5 it only turns on when you have a helmet on, oddly enough, and in the CitCon gameplay demo the character ran around without a helmet and seemed to have no near-field depth of field on the weapon stock. IMO, keeping them consistent (i.e. having near-field depth of field on at all times, regardless of helmet) would probably be advisable, so that you never get a moment where depth of field suddenly pops in, in a gamey kind of way, where it was not before.

    Best wishes to you and great work on the planetary demo!
    The indirect lighting you see there is currently just from the sky, it's true. It's an efficient effect to use over massive distances, but doesn't capture the landscape details or handle specular highlights particularly well. We've got work in progress, though, that should handle things in more detail around the player or key landmarks, so it should be less noticeable.
    The videos in that Monster Hunter presentation didn't look too poppy overall, I assume due to the temporal reconstruction, but I guess you can never know if they picked the most agreeable scenes to demonstrate it.
    DoF levels are controlled by scripts from the tech design team, so I don't actually know what it is that changes them or what their full plan is there.
    Programmer - Graphics Team
  • MCorbetta_CIG

    Developer

    Posted:
    [hide]

    Hey fellas! After watching the livestream I was wondering whether you have thought about releasing the planet creation tools to the backers? I just got back into playing XCOM 2 and was really surprised by the quality and quantity of the thousands of mods out there, which brought up the thought of what might happen within the SC community with tools like this... If the community could create assets for SC it would have a lot of potential, especially in the quantity department, since we need a lot of assets for the final game. Plus we would have something to do while waiting for the final game, and your workload would be relieved if the best of the best planets could come from the SC community.
    Just wanted to know if this idea has any merit to it.
    Regards and keep on blowing us away :-)

    Hey, releasing some tools to the community to create assets / content is something I have already mentioned we should be looking at :) The main problem is, the builds you guys are playing are the so-called "Release" builds, meaning they are stripped of all source assets, the data is encrypted, development tools are removed, etc. For modding, backers would need some of our development tools, which only work with Development builds - the builds we use internally. That would effectively mean maintaining and updating separate branches / configurations for mod tools, and this would be a time-consuming process. First of all, it won't be possible to give out entire development builds, so it would require a dedicated person to prepare and release an initial set of modding tools compatible with the official builds, and at the moment we don't have any spare resources to look into that. So here is my suggestion...

    1) We need to be sure that there is effectively a high demand from the community to utilize such modding tools
    2) If point nr.1 is true and someone from the community is familiar with C++, scripting, and CryEngine modding, please apply to our Frankfurt office to potentially work on that...

    Cheers,
    Marco


  • MCorbetta_CIG

    Developer

    Posted:
    [hide]



    Do you plan on changing the 'look' and feel of the rovers wheels and ships landing gear to better reflect the actual planet terrain?

    I.E. Right now the rover looks like it is floating on the ground rather than driving on it.. There is no depth to the wheels vs the ground. What I mean is the weight and mass of the Rover should have the tires sinking into the ground a certain depth depending on the actual terrain.

    So I think I know what you are referring to: sometimes it looks like the tires are not sinking into the terrain because of the parallax-mapping displacement - the small details like grains / dirt you see everywhere on the terrain are rendered through that per-pixel displacement. This results in the average ground depth being (depending on the content setup) roughly 5 centimeters "inside" the physics mesh that is used for wheel contact. So I was planning to move the planet physics mesh terrain generation a bit inward, which will effectively make the tires and players' feet intersect with the ground-level terrain details, since such ground details modify the depth output. That is on my TODO list to work on when time permits...
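    A rough illustration of the idea described above (the names and the 5cm figure are illustrative; this is not CIG's code): lower the physics heightfield by the average per-pixel displacement depth, so contacts sit inside the rendered surface detail rather than floating on top of it.

```python
def inset_physics_heights(render_heights, parallax_depth=0.05):
    """Lower a physics heightfield (heights in metres) by the average
    per-pixel parallax displacement depth (assumed ~5 cm here), so
    wheels and feet visually sink into the displaced surface detail
    instead of resting on the undisplaced mesh surface."""
    return [h - parallax_depth for h in render_heights]

# inset_physics_heights([102.00, 102.10]) lowers each sample by 5 cm.
```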

    Marco
  • BParry_CIG

    Developer

    Posted:
    [hide]

    Will "Star Engine" make use of unified memory?
    That is, using the GPU's memory as both GPU memory and system memory.

    Is there any point in doing unified memory for games? I've only heard of it being done in supercomputers, so I just wanted to ask.

    Hi @zerkerz,
    This tends not to be done since the majority of PC systems have an architecture with a bandwidth bottleneck between the GPU and CPU. It's a lot less restrictive than it once was, and GPU drivers do now overflow GPU memory into main memory if they run out of space, but it's unusual for anything to get shifted the other way.
    Programmer - Graphics Team
  • MCorbetta_CIG

    Developer

    Posted:
    [hide]

    Wow, since even Marco Corbetta is answering questions, are there any plans to procedurally generate urban areas and/or cities? Obviously these don't have to be very interactive/explorable, but it would make sense on some more densely populated planets that might need cities for fly-overs.

    I remember seeing a demo I thought was very cool a long time ago that did pretty much this which Corbetta happened to work on as well; might be fun to see again:


    Wow, that's a very old video! But yeah I think there were some nice tech features in there. I still need to consider if/what ideas from that demo could be applied in the context of Star Citizen planets...

    Marco

  • BParry_CIG

    Developer

    Posted:
    [hide]

    Hello again! As we are already talking about the fancy-GPU stuff here, what about GPGPU-processing? Can we already take advantage of it in the current engine and/or is this rather a thing for the later Vulkan implementation and can we use it to solve current bottlenecks?

    I'm thinking about the netcode here, where we have to stream millions (?) of parameters for every frame that clogs up the server right now. Beside optimizing the netcode itself, is it possible for the GPU to handle that kind of processing? A GPU has several thousand shader cores which only have to do simple tasks (updating the state of an object) or is the bottleneck at another point?

    Hi again @Valdore,
    While GPGPU is a pretty useful technique in some cases, it tends to be pretty niche. The GPU has a lot of cores, but it works best when it has to do very predictable, non-branching work on a large dataset that it has easy access to. Networking would be pretty terrible for this, since the information arrives serially rather than in parallel, the information for different systems wouldn't be able to be easily handled in the same shader, and the work needed to get that data up onto the GPU would probably be more expensive than the work that needed doing on it in the first place.
    I'm trying to think of something that would be a good fit for GPGPU, but the only thing that springs to mind is physics updates on large numbers of simple objects. This isn't surprising since GPGPU physics is what PhysX already is.
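    To illustrate the access pattern that does suit GPUs, here is a CPU-side sketch (hypothetical, not engine code) of the kind of uniform, branch-free per-element update that maps well onto GPU compute:

```python
def step_particles(positions, velocities, dt):
    """One simulation step for many independent, simple objects.

    Every element receives the identical, branch-free update -- the
    data-parallel pattern that GPUs excel at, and the reason GPU
    physics on large numbers of simple objects is a natural fit.
    """
    return [p + v * dt for p, v in zip(positions, velocities)]

# step_particles([0.0, 1.0], [2.0, -1.0], 0.5) -> [1.0, 0.5]
```

    Networking is the opposite case: data arrives serially, each message needs different handling, and the upload cost would dwarf the work itself, exactly as described above.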
    [hide]

    Question @BParry_CIG - Will Star Citizen (Star Engine) use DX12 multi-GPU? Just wondering, because I'm thinking of selling my old GTX 970 - am I better off holding on to it, to potentially use in a future mGPU set-up with an RX 480?

    Hi @Kaiser_Solo, it's hard to say how well we'll support heterogeneous multi-GPU until we have a chance to try it. I can imagine bad results if we stick with conventional alternate-frame rendering as our multi-GPU approach, since you'd get alternating slow and fast frames, and AFR would be the path of least resistance when it comes to multi-GPU.
    [hide]

    Have you guys thought about having slight HDR overexposure/imperfection?

    I think quite a few games make the mistake of making HDR too perfect, to the point where nothing, not even very bright lights, looks even slightly overexposed.

    I've been playing this new game called Paragon recently, powered by UE4, and the thing that struck me the most is how life-like it looks.
    The game has top-notch adaptation, which doesn't try to correct the lighting to perfection. This leads to some very convincing results.
    PKp0jmG.jpg

    This is quite subtle, but notice how the glowing letters look more white than blue, even though they are emitting blue light.
    This gives the impression of HDR actually struggling to balance bright lighting, which I think comes off as very realistic, because it's simulating HDR adaptation in real life not being 100% effective.

    I think this is something that Star Citizen misses right now. When I look at, let's say, laser shots, they don't give the impression of being glowing, bright, energy-filled things that could almost blind you. They look more like very dim and boring red sticks, just to give an example.

    Having some HDR imperfection could make all lighting look more life-like, from the brightest sources down to very dim ones like cockpit lights. What do you think?

    Hi there @Fushko,
    I'd really be glad if the main worry about our exposure calculation was that it's too perfect :D
    The recent changes to bloom/glare have all been pushing towards this kind of more subtle handling of the transition from slightly over-bright to seriously eye-searing brightness, and there's definitely more room to get the exposure calculation better.
    I do know there's a discussion going on at the moment about how to avoid over-adapting to what's on screen at any given moment (eg not brightening a dark grey carpet just because the player happened to look down for a few seconds), but the issues with the laser bolts are probably less to do with scene exposure settings and more to do with them needing some love in the rendering department. The gentleman sat opposite me has been looking at a whole bunch of the things that need doing with them, so hopefully you'll see some improvement there soon.
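    For readers curious what "avoiding over-adaptation" can look like in practice, one common approach (sketched below with made-up parameter names; this is not CIG's implementation) is to clamp the auto-exposure target to a sensible range and blend toward it gradually:

```python
def adapt_exposure(current, scene_luminance, target_key=0.18,
                   min_exposure=0.25, max_exposure=4.0, rate=0.1):
    """Move exposure toward the value that maps the average scene
    luminance to a mid-grey "key", but clamp the target so the camera
    never fully brightens a dark patch (e.g. a grey carpet) just
    because the player looked down for a moment."""
    target = target_key / max(scene_luminance, 1e-6)
    target = min(max(target, min_exposure), max_exposure)
    # Exponential smoothing: adapt gradually rather than instantly.
    return current + (target - current) * rate

# Staring at a dark floor (luminance 0.01) would "ideally" need 18x
# exposure, but the clamp caps the target at 4x, so it stays dark.
```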
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    [hide]

    I feel like this is a programming question. Are there plans to make scopes work more like real life? I noticed in the 3.0 demo that the scope zoomed the whole screen, rather than just the scope. Given how deep you guys are going on details, will this change in the game?

    Hi @DeltaForce9559,
    Making scopes zoom independently from the rest of the view would be a classy feature. Making it happen would basically require the same multi-view tech as we already need if we want to do video comms, or other picture-in-picture stuff, so we're likely to have the necessary underlying feature set.
    The trouble we'd run into when doing it, though, is that a naive implementation would end up paying for a lot of stuff to be drawn in the middle areas of the screen, and then paying again for the stuff being drawn in the scope. There would also be trouble with LoD streaming choices (scope zoom is giving us a big headache as it is) since you'd need near objects to be at a high quality over a wide field of view, but also far objects within a narrow field at the same time. Not insurmountable, but definitely a challenge to get right.
    [hide]

    So, yesterday we had a full moon where I live. Looking at it, and outside, I noticed how light it was compared to when there is no full moon. So I asked myself: will this be the case in Star Citizen? Will large bodies in space actually bounce light?

    Like the moon does when it's full somewhere on Earth.

    Hey @SpoofGhost,
    There's two ways we could do this, and annoyingly one way is better at full moon, but wrong the rest of the time.
    So, our engine supports nice-looking spherical lights. If we wanted to do moonshine (not to mention earthshine) from a full-moon, the best quality we could get is by placing a light source of appropriate size and brightness right where the moon is, and just letting it be a light. Problem is, any kind of half-moon or crescent can't be represented that way, so the best we'd get is a half-moon with a reflection of a half-brightness full-moon. Not so cool.
    The other way we could do it is just to make sure the brightness of the sun is correct (at least as it's applied to the moon), and capture that in all our environment probe images. We're currently working to extend environment probes to do live captures wherever you go in the world, so the moon would be able to illuminate you at the passable quality that other reflections can. This'll look good, but sadly won't ever get you the long sparkly "moon over the ocean" effect when you're near an ocean.
    [hide]

    [hide]


    In my opinion, HEAD TRACKER + JOYSTICK is a very cool combination!!!

    Back in 2.0 or so, there was TrackIR support in the game; I would really like to know why it's not working anymore. It was such a great experience on a 34" curved! So please tell us, Ben... why is it actually not usable? And the other question is, of course: when will it be? :P
    Sorry, @EIEC, peripherals aren't anywhere near my assigned area.
    [hide]

    Can CIG release the section of the game used in the CitizenCon demo so people can get an understanding of the hardware requirements? Maybe just the section up to the point where the game went into free flight with the Constellation? I think some kind of timedemo with a frame counter would be very helpful to get a feel for how the game runs on various systems.

    Cheers.

    Hi, @Polecat. That would be really cool, actually. I'm not sure it would give you a representative picture of the hardware requirements, since there's a lot of stuff in the game that's not in the video, but it would still be cool.
    [hide]

    Watching the live-stream, I wondered about intense weather like sandstorms - is there the possibility within Star Engine to prescribe 'values' to those weather events that affect the character or their vehicle? For example, could you 'tell' a vehicle like the Dragonfly to become less responsive as it's buffeted by the winds? Similarly, could a character's movement be affected by dynamic weather like the sandstorm?

    It seemed last night, obviously because this is all quite unrefined (I imagine), that there wasn't too much of an impact from the storm on the player.

    I also wondered whether weather (haha) would persist on the planet. By that, I mean, could the sandstorm carry onwards and be experienced by a player 5 km away?

    EDIT: Perhaps more of a design question, but would AI feed into the behaviour of animals like the sandworm? Could we actually be hunted?

    Hi @Acheronn,
    Somehow missed these questions up above.
    1) I think there's a thing quite like that in Squadron42, not a sandstorm, but environmental effects on your systems.
    2) There's definitely an intention that any weather effects, clouds etc are consistent between clients. Whether the weather systems will move isn't clear to me, but there's no specific technical reason that they couldn't.
    3) No idea about AI, sorry.
    [hide]

    @BPerry_CIG since we're on the topic of shadows: how does the central sun light affect the dark side of a planet? Will it just be lit by atmospheric light, or will light bounce from moons and such? Or something entirely different?

    Hi @ArmoredCitizen, yeah, I guess it'll have to just be sky-glow from the horizon, moonlight, starlight, headlamps, etc.

    [hide]

    To add to questions about the CitCon demo:
    - The skies on the planets looked magnificent! That light scattering really works great! But, since we Earthlings are pretty new to the concept of having multiple stars: HOW does light scattering work when you have 2 stars in the sky? Is it additive, multiplicative? Can you maybe show some examples to salivate over? ;)
    - The lights of the Ursa rover shining on the ground didn't seem to have the correct colorization. All terrain seemed to become white when shone upon, no matter which terrain it was, even though the daylight should have kept its main colorization. Simply put: yellow sand shouldn't turn white when lit by headlights during the daytime.

    Just my 2 cents! The demo looked really awesome. That crimson red evening sky on the sulfur planet/moon... WOW! Just WOW!

    Hi @Voyager_NL,
    This is actually something I'm pretty opinionated on - I think we shouldn't have two suns in the game, full stop. As far as I know, the light scattering could work out OK (I might be wrong on this, but I think it could be added together correctly), but all the additional pain that it brings is just terrible. A surprising amount of our frame time and offscreen graphics memory is spent on sun shadows, so allowing there to be two of these huge, dominant lights at once essentially means you have to have everything in the game drop shadow quality massively the moment they're more than a few degrees apart.
    Obviously everyone wants two suns, thank Lucas for that one I guess, but it'd be a shame if instead of having "planet of the breathtaking double sunset" we had "planet of the everything looks sort of bad".
    Programmer - Graphics Team
  • MCorbetta_CIG

    Developer

    Posted:
    [hide]

    I apologize if this has been addressed elsewhere, but I am curious: When you visit a planet in Star Citizen, how much detail will procedurally re-generate the next time you visit? That is, will the areas left untouched by level designers re-generate exactly as they were before, or will different terrain be generated each time?

    Becoming familiar with an area after many trips overhead would make anywhere on a planet begin gradually to stand out as its own unique place, even in the wilderness where one valley is no more significant than the next in terms of content...not to mention it would really open up the possibilities for tactics.

    I imagine myself setting up a secluded, carefully concealed stash and/or hide-out way out in the middle of nowhere that I can go back to again and again - even on foot, if I am near enough to get my bearings.

    Still...the Procedural Planets v2.0 demo is spectacular no matter what, and I'm giddy with excitement about the prospect of descending from space and exploring a new planet!

    Hi Ardem,
    Yes, as long as the planet parameters and setup are the same, it will re-generate exactly the same environment each time you visit.
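    That determinism follows naturally when generation is a pure function of the planet seed and the sample coordinates; a toy illustration of the principle (hash-based, not the actual SC generator):

```python
import hashlib

def terrain_height(seed, x, y):
    """Deterministic pseudo-height in [0, 1] for a planet cell.

    Because the value is a pure function of (seed, x, y), revisiting
    the same coordinates always regenerates an identical result --
    no terrain ever needs to be stored.
    """
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 0xFFFFFFFF

# Same seed and coordinates -> the same height, on every visit.
```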

    Marco

  • MCorbetta_CIG

    Developer

    Posted:
    [hide]

    Outerra has a fully working procedurally generated planet called Earth (or Terra if you like), and it looks very similar to SC's own planet tech.
    How similar are they in detail, framerate and technique?

    Would love to hear details from CIG about it :)

    Hey, that Outerra video looks very nice! To answer your question: on their webpage they say that the planet is not generated as you mentioned, but is actually streaming in real-world information from an Earth dataset (from their webpage: "15GB for fully downloaded Earth dataset") and then using fractal refinement to fill in the details.

    So to answer your question: there are visual similarities, but I think they are actually very different in scope and goals. I think the Outerra team has been working on recreating a model of Earth as closely as possible, while the SC planet tech is more focused on supporting customized, art-driven planet creation as shown during CitizenCon: multiple planets at the same time, including moons, alien planets and some planets that are not even completely terrain-based. We could actually already feed the SC planet tech some very low-resolution Earth elevation data (very low resolution compared to the Outerra dataset), but that has not been our focus and we have not been working on it at the moment. The atmospheric models might look similar if both use an accurate model of light transport with multiple scattering. In sum, there are a few games supporting procedural planets, and I think they are all doing a very good job :)

  • geoffbirch

    Posts: 9

    Posted:
    [hide]

    [hide]


    Hi again @Valdore,
    While GPGPU is a pretty useful technique in some cases, it tends to be pretty niche. The GPU has a lot of cores, but it works best when it has to do very predictable, non-branching work on a large dataset that it has easy access to. Networking would be pretty terrible for this, since the information arrives serially rather than in parallel, the information for different systems wouldn't be able to be easily handled in the same shader, and the work needed to get that data up onto the GPU would probably be more expensive than the work that needed doing on it in the first place.
    I'm trying to think of something that would be a good fit for GPGPU, but the only thing that springs to mind is physics updates on large numbers of simple objects. This isn't surprising since GPGPU physics is what PhysX already is.

    Sounds pretty shallow to me.

    http://www.sciencedirect.com/science/article/pii/S0893608011001687

    http://download.springer.com/static/pdf/789/art%3A10.1186%2F1471-2202-12-S1-P239.pdf?originUrl=http://bmcneurosci.biomedcentral.com/article/10.1186/1471-2202-12-S1-P239&token2=exp=1477188428~acl=/static/pdf/789/art%253A10.1186%252F1471-2202-12-S1-P239.pdf*~hmac=72201c57f6f8e76d21a83a6d56b36375b518c67ecf65afc497d24a3395471963

    http://queue.acm.org/detail.cfm?id=2484010


    Execution Unit, Compute Unit, and Streaming Multiprocessor

    These are core to the desktop user.

    What are the non-desktop bound networking tools you plan to utilize?
    Duplex, multiplexed? I mean, really, the information heading out is bound to the NIC as data bits. The data format must be handled by processing that is intelligently designed as far as the desktop user is concerned. The network side can be ARM, PowerPC, etc.
    So ultimately, if you're talking about GPGPU work on the client side (which I'm going to presume you are, given that you said 'These are core to the desktop user.') then GPGPU is rather limited for us. GPGPU stands for General-Purpose computing on Graphics Processing Units, and it's that second part which is rather important in this case. You see, most GPGPU work you see online and in academia is, 99% of the time, not using the GPU for its originally intended purpose...GRAPHICS RENDERING. And therefore it's an untapped pool of raw computational power available to them. Although GPUs have almost insane levels of compute power compared to CPUs, they're ultimately stupid; by that I mean CPUs are like humans, we may not be very quick at mathematical calculations, but we can do a lot of management of memory, tracking of irrelevant stuff and lots of other clever things, and GPUs are like computers, they can do millions more mathematical computations than us, but it's not like they can make me toast (awaits someone posting me a computer that can make toast :P, or possibly a video of the Red Dwarf toaster).

    So when they talk about GPGPU, what they really mean is doing massive number crunching using a cheap piece of hardware that is going unused (cheap in relation to buying a beastly CPU simply to crunch numbers). So OK, we've got a GPU and we can do compute on it, and those articles you linked show some options: AI neural networks (two papers) and audio (one paper). Audio on the GPU, I'll be honest, I don't get why this is a thing, I need to research that, but AI I get: it's a lot of state tracking and number crunching, which I guess a GPU could handle if the functionality it was required to do was simple enough. However, neither of those is networking; and networking is really not number crunching. Yes, there's a lot of data, but it needs a CPU to really work out what's going on with that data, because it's not like we're getting a tonne of data which we can bang away at using the same computational function; it's more likely each piece of data needs to be handled individually, evaluating what needs to be done with it and sending it to the right part of the system to be converted into something useful (I'm not a network programmer, so don't expect specifics; the last time I did networking and concurrency I was at Uni, and it's fair to say I was amazing!!! that's possibly not, maybe, definitely a lie).

    But the main reason we're unlikely to be delving massively into GPGPU in SC is simple, we're using the damn thing, the GPU that is, and it's not like there's spare compute going unused; I know we're working the GPU hard in SC, because the underside of my desk at work can get uncomfortably warm when it's running. I mean there possibly is some spare computational performance here and there, but we'll probably start filling that up with GPU-Particles which fit quite nicely with the Rendering work we're already using the GPU for.

    However if you'd have said we could use GPGPU on servers then that would have been a different conversation and I'd have started with.....well you didn't ask that question, so I'm not going to answer it :P. You did ask about non-desktop bound networking tools that we planned to use and quite frankly I'm not the man to tell you that, you'll have to wait for a network engineer to wander by your question and provide you with the answer you desire.

    G
  • MCorbetta_CIG

    Developer

    Posted:

    @MCorbetta_CIG

    I'd just like to ask about horizontal displacement in planet generation, is it something CIG is looking at? Thanks!

    Hi,
    I had horizontal displacement in one of the early prototypes, but it was causing complications with collisions and elevation queries, so I took it out; I might look at it again at a later stage...
  • MCorbetta_CIG

    Developer

    Posted:

    With the powerful planet editor setting expectations so very high, I wonder how CIG plans on accurately recreating the geographical features of Mars and, of course, the big one, Earth, where every citizen will try to land their spaceship on the few square metres where their real-life house is.

    There is a lot of data available on the geographical details of Earth, but is there a way to feed it into the planet editor to prevent having to do the impossible task of handcrafting the whole planet?

    Hi,
    In the latest ATV Frankfurt we have already shown a couple of planets where the high level geography was fed into the planet editor. There are different layers of generation:
    A very rough, low-resolution continent map defines the broad shapes of the planet and ecosystem placement. At the next layer, the ecosystems define the general terrain shapes and features over an area a few kilometres wide. One thing to note is that, for example, Homestead was using only 5 distinct ecosystems; these ecosystems are reused all over the planet, mixed and combined procedurally at different scales and locations.
    At a more granular scale, there are groups defining the generation of trees, vegetation, ground cover etc. These are also procedurally spawned on demand based on the player's location, and follow the ecosystem shapes and features.
    Materials, ecosystems and assets are shared as much as possible across different ecosystems and planets where it makes sense.
    Then with PlanEd it is possible to customise and paint on top of these layers, or to start from a completely blank planet (memory usage remains the same).
    One can also make a completely random planet by feeding random data in at all these stages, but it would look uninteresting and too generic (hence one of the main points why we have expanded the system to what we have today...). Basically the system is trying to combine the best of both worlds, hand crafting and procedural generation.

    The data being generated at all the different stages is pseudo-random, so it doesn't need to be explicitly synced between client and server.
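    As a toy illustration of that "pseudo-random, so no sync needed" point (this is my own sketch, not the actual generation code): if terrain data is derived purely from a shared seed and the query coordinates, client and server can each evaluate it locally and agree bit-for-bit, with nothing sent over the wire.

    ```python
    import hashlib
    import struct

    def terrain_value(seed: int, x: int, y: int) -> float:
        """Deterministic pseudo-random value in [0, 1) for a grid coordinate.
        Hash-based, so any machine with the same seed gets the same answer."""
        digest = hashlib.sha256(struct.pack("<qqq", seed, x, y)).digest()
        return int.from_bytes(digest[:8], "little") / 2**64

    # "Client" and "server" share only the seed, never the generated data:
    client_sample = terrain_value(seed=42, x=1013, y=-77)
    server_sample = terrain_value(seed=42, x=1013, y=-77)
    assert client_sample == server_sample
    ```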

    I would say though that our goal is not to accurately recreate the exact geographical features of an existing planet, but rather to provide an interesting gameplay space that looks plausible and realistic, is visually rich, and fits into Star Citizen's lore and the ultimate goal of creating a living universe.

    Regarding the generation and rendering of cities, this is a large area of research; I think our first prototype, as was already mentioned in the ATV, will be based on the fictional "machine/engine"-based ArcCorp planet...
  • MCorbetta_CIG

    Developer

    Posted:
    Edited: by MCorbetta_CIG

    Hi Marco, first off, congratulations to your team for the PlanEd tech.
    As an env.Artist I can truly see the work done and the immense potential of it. I'd love to work with that !!!


    Did you guys rip out the vegetation generator attached to a ground material, like the one we can use in CryEngine?
    How hard will it be to re-integrate the road/river tool that would also blend into terrain and vegetation?
    Congrats once again on achieving the micro-detail surface on planets; V1 was nice, but thanks to this smaller scale it turns into something viable, it just misses the vegetation generator for pebbles, grass and trees.
    Did you find a limit on how many ground materials can be used on a single planet?
    And what about a random prefab scatterer? How do you plan to make planets less generic and empty?

    Last one: softbody physics on a broader scale? As far as I know it's only for ragdolls, but what about clothes and environment?


    Hi John,
    Thanks for your support; yes, as mentioned during the video, trees, grass, ground cover etc. are work in progress; there is more work we need to do before we can have, let's say, a rich jungle environment on a planet. The default CryEngine terrain and vegetation is pretty good, but does not support the tech we need in order to have this working on a planetary scale; so, as was mentioned, we didn't use any of the CryEngine ones. This is because we had to support spherical terrain (since a player can go and land anywhere on the planet), seamless transitions from space, runtime generation, 64-bit positioning and, longer term, planet rotation (so the vegetation rendering goes through the zone system instead of the default CryEngine one).

    The same goes for the rivers and road tools, they were not meant to work on a planetary scale, so they would require a lot of changes. For example we are still finding from time to time code locations that need to be fixed since the entire game and engine were assuming a flat world aligned with Z+, and this is not true anymore on arbitrary planet locations.

    Regarding prefab scattering, we have the new Group tools in PlanEd, which we will probably talk about later once we've made more progress, but basically we are planning to generate on demand not only grass and trees but also environmental particles, sounds etc., depending on the ecosystem and higher-level planetary status and weather.

    I am not sure what you meant by your question regarding softbody physics; generally speaking, planets have their own radial gravity sphere, so physics objects should behave properly once inside the planet's gravity sphere.

    Cheers
  • MCorbetta_CIG

    Developer

    Posted:

    Hey @MCorbetta_CIG / @BParry_CIG

    Given how the terrain system, and the vegetation alignment to terrain is now different from default CE, how exactly will vegetation interaction work? Will it still use the awesome merged mesh vegetation that was seen in Crysis 3? Or does the planetary scale / larger view distance impose some limitations there?

    It would be pretty amazing to see a field of interactive, weather-affected, swaying, yellowed grain or Jurassic-Park-2-style tall grass in the new physically based renderer on a planetary scale.

    Best and thanks for any answers!

    Hi, yes it would be pretty amazing to get the same tall grass as in Crysis 3 on a planetary scale, but we might need to look into different approaches to achieve that...

    I have a question about the potential large water surfaces on a planet. I know planetary water is at an early stage of development, as mentioned in AtV 3.11.

    I think some sort of technique like this could enhance large water areas a lot.

    Hi, yes we have plans to change water color and wave strength based on water depth and other properties, and to reduce tiling at distance.
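    A common way to do depth-driven water colour, sketched below purely for illustration (the colours and absorption constant are made-up parameters, not CIG's values): blend from a shallow colour toward a deep colour with an exponential falloff as depth grows, mimicking light absorption.

    ```python
    import math

    SHALLOW = (0.10, 0.60, 0.60)   # turquoise near the shore
    DEEP    = (0.00, 0.05, 0.20)   # dark blue open water

    def water_color(depth_m: float, absorption: float = 0.15):
        """Blend SHALLOW -> DEEP as depth grows; t is 0 at the surface and
        approaches 1 in deep water, like exponential light absorption."""
        t = 1.0 - math.exp(-absorption * max(depth_m, 0.0))
        return tuple(s + (d - s) * t for s, d in zip(SHALLOW, DEEP))
    ```

    The same falloff value can drive wave strength or tiling frequency, which is roughly the "based on water depth and other properties" idea in the answer above.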

    Cheers,
    Marco
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG

    I have watched a lot of ship landings, and it seems that every time they land they go to a third-person view. Would it be possible to have a display in the cockpit that we can use to see the landing pad, or a camera on the back of a ship like the Freelancer, for precision positioning as we land? Thank you for your time.

    This is probably doable, but for that we'd need the picture-in-picture work that was discussed upthread. With that, external cameras become a lot more feasible. We might still want to add overlays or something to make it schematic-y, though, because (for instance) we optimize shadows to get the best possible resolution by having none of them calculated for the areas outside the player's view. There would either be no shadows in the exterior camera view, or there'd be a visible decrease in quality when it activated.


    Hi @Voyager_NL,
    This is actually something I'm pretty opinionated on - I think we shouldn't have two suns in the game, full stop. As far as I know, the light scattering could work out OK (I might be wrong on this, but I think it could be added together correctly), but all the additional pain that it brings is just terrible. A surprising amount of our frame time and offscreen graphics memory is spent on sun shadows, so allowing there to be two of these huge, dominant lights at once essentially means you have to have everything in the game drop shadow quality massively the moment they're more than a few degrees apart.
    Obviously everyone wants two suns, thank Lucas for that one I guess, but it'd be a shame if instead of having "planet of the breathtaking double sunset" we had "planet of the everything looks sort of bad".
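    Some back-of-the-envelope arithmetic on why a second sun hurts so much (the cascade count and resolution below are illustrative guesses, not Star Citizen's actual configuration): a directional sun typically owns several cascaded shadow maps, and a second sun more than a few degrees away can't share them, so the budget effectively doubles.

    ```python
    def cascade_memory_mb(cascades: int, resolution: int, bytes_per_texel: int = 4) -> float:
        """Total memory of one sun's cascaded shadow maps, in MB."""
        return cascades * resolution * resolution * bytes_per_texel / (1024 ** 2)

    one_sun = cascade_memory_mb(cascades=4, resolution=2048)   # 64.0 MB
    two_suns = 2 * one_sun                                     # 128.0 MB

    # To stay inside the one-sun budget with two suns, per-cascade resolution
    # would have to drop (2048 -> ~1448 roughly halves the texel count),
    # which is the across-the-board quality loss described above.
    ```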

    Would it be possible, or would it make any sense, for cases where binary stars are used, for design to agree to have one of the stars be much smaller than the other, and cast shadows of a much lower quality/accuracy/cost for the lesser one?
    It's theoretically feasible, but bear in mind that even a dim light will look strange when you're facing almost away from the sun, and it's flooding your cockpit.


    Ouch... right in the feels, man! No DoubleSunsetSky for us Luke wannabes ;)

    I'm guessing you're the kind of guy that when you say it can't be done... it won't be done for at least a while!
    I did already notice in the Ark Starmap that binary star systems weren't very common anyway... as in: I couldn't find any! :D

    At least now we know why: technical limitations. Funny thing is that Elite: Dangerous, in its current version, also only uses the light of one star for shadows/color, even in multiple-star systems. So it seems to be a common issue for rendering shadows.

    Might this be something that could be added by DX12 MultiGPU rendering? A whole extra GPU for taking care of extra shadows?

    Funnily enough, I formed my strong opinions on the subject during the development of Elite. :D To my credit I resisted violence when the then-lead programmer told me we needed to flexibly support up to eight suns in one solar system.
    In theory you could hand over a lot of shadow stuff to another GPU, but we always have to have a solution in place that makes standard hardware look standard quality, so it's more likely you'd see us spend extra GPU on giving you more quality in shadows that we'd usually have turned down/off.

    When you guys finally tackle anti-aliasing, are you considering pursuing "advanced" AA methods or just traditional MSAA? (not an expert, but I understand that there are many new methods that aren't widely supported)

    I ask because, IMO, Star Citizen currently struggles with an image-quality problem without AA. There's an insane amount of detail in so many assets that can get obscured in a sea of non-AA 1080p jaggies.

    Hi @LordXenos,
    It's almost guaranteed that any kind of AA we go for is going to be some sort of weird post-process (probably temporal, to my dismay) system. MSAA might be the old basic standard, but in deferred shading engines like ours it would need extensive code changes to make it work, and even then would be a shocking memory and performance hog. Some of the newer techniques have clever ways to integrate MSAA data into all their other clever stuff, and I'd love to take a crack at that for the sake of pride and the more-ultra-than-ultra GPU setups, but time will tell.
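    For a sense of scale on the "shocking memory hog" point, here is rough arithmetic (the target count and formats are illustrative, not the actual CIG G-buffer layout): in deferred shading, every G-buffer target has to store all MSAA samples, so memory scales with the sample count, unlike forward rendering where only the final colour/depth are multisampled.

    ```python
    def gbuffer_mb(width: int, height: int, targets: int,
                   bytes_per_px_per_target: int, samples: int) -> float:
        """Total G-buffer memory in MB; MSAA multiplies every target."""
        return width * height * targets * bytes_per_px_per_target * samples / (1024 ** 2)

    # Hypothetical 1080p G-buffer: 4 render targets at 4 bytes/pixel each.
    no_msaa = gbuffer_mb(1920, 1080, targets=4, bytes_per_px_per_target=4, samples=1)
    msaa_4x = gbuffer_mb(1920, 1080, targets=4, bytes_per_px_per_target=4, samples=4)
    # ~31.6 MB vs ~126.6 MB just for the G-buffer, before shadow maps etc.,
    # plus every lighting pass now has to read 4x the data.
    ```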

    Question, not even sure this is the right part of "Ask a Dev" to ask this, but i figure since it involves programming, i'd give it a shot..

    Are there any plans to further flesh out graphical options beyond what we have now (i'd surmise so but i'd like confirmation)?
    And if So, how in depth will they be?

    since we're all PC gamers you folks know just how much we love to tweak things until they're juuuuuust right.

    I'm definitely in favour of detailed options. At the moment, people are doing all sorts of tweaks with CVars, but a lot of those are exposing stuff that should be automated, while not exposing stuff that ought to be configurable. We want to do a big house-clean on all that stuff.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    Hi!

    I couldn't help but ask about this:


    (probably temporal, to my dismay)

    What do you mean by "to my dismay"? Is there any reason why you would prefer to use another method?
    Since it is used in Unreal and in games such as INSIDE with quite some success, I thought it was pretty much the best option nowadays. It seems to be quite tricky to implement correctly, though, from what little I have read.
    I guess what I'm saying is, I'm dismayed that temporal turned out to be the winner here. It's very, very good when it's working, but so much of the development around it is people iterating on ways to filter out all the different ways that it breaks down, especially on alpha-blended geometry, leaving weird trails, ghost images etc. Just a pain, really.
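    A minimal 1D sketch of the breakdown being described (my illustration, not engine code): temporal AA blends each frame into an accumulated history buffer, and the standard mitigation is to clamp the history to the neighbourhood of the current frame so stale values, the ghosts and trails, get rejected quickly.

    ```python
    import numpy as np

    def taa_step(history, current, alpha=0.1, clamp=True):
        """Blend current frame into history; optionally clamp history to the
        local min/max of the current frame so values from objects that moved
        or vanished can't linger as ghosts."""
        if clamp:
            lo = np.minimum.reduce([np.roll(current, -1), current, np.roll(current, 1)])
            hi = np.maximum.reduce([np.roll(current, -1), current, np.roll(current, 1)])
            history = np.clip(history, lo, hi)
        return history * (1 - alpha) + current * alpha

    # A bright object was at index 2 last frame but is gone this frame:
    history = np.zeros(8); history[2] = 1.0
    current = np.zeros(8)
    ghosted = taa_step(history, current, clamp=False)  # residual trail remains
    clamped = taa_step(history, current, clamp=True)   # stale history rejected
    ```

    The clamp is exactly the kind of "filter out the ways it breaks down" heuristic mentioned above, and it is also why thin features and alpha-blended geometry are hard: the clamp happily throws away history they needed.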

    I wouldn't even need/want a PiP for landings, just a better graphic of my ship in relation to what I'm trying to land on. I do like the pitch/roll/height indicator on the HUD in Elite Dangerous; I know exactly what's going to happen when I manually touch down on yet another generic rock planet of theirs. I guess my question is: how close can you get to approximating that without being called out for ripping them off entirely?

    I take it you mean the little mini-self image that replaces the radar? I love that thing. Actually, the way we're approaching PiP etc, even if we did an exact copy of what they did (unlikely), we'd probably be planning to route it through the same interface that we want for video comms or external-cam anyway - we're working to generalise the render-to-texture interface so that everything from live cameras to flash videos shares a common pipeline.

    Programmer - Graphics Team