PROGRAMMING (Engine, API, Hardware, etc)

discolando

Developer

Posted:
This thread is for asking questions about the overall programming work that makes Star Citizen (and Squadron 42) a reality.

Please refer to this post here for more information about how the Ask a Dev section works.

And heed all forum posting conduct rules found here.
Jared Huckaby
Community Manager, Cloud Imperium Games
  • BParry_CIG

    Developer

    Posted:

    Regarding low-overhead APIs: how are things going with Vulkan/DX12, and what are - aside from the time it takes to implement - the most challenging aspects of getting them done? Are there any blockers? (Like a code freeze for a complete overhaul or similar things)

    Hi @Valdore!
    First a disclaimer: I'm not the guy doing this work, but I am using the old/new interfaces.
    I think the biggest issue with the transition to new APIs is old code. Looking through the codebase, you can definitely tell that CryEngine started back in D3D9 times, on single- or dual-core machines, and did a lot of stuff that was probably efficient, or flexible, or at least fast to develop at the time. So a lot of existing code accesses big global structures - one thing sets a shader to be used, leaves it set for other things that will go and check what else it wants done, eventually the global state represents the next thing that ought to be drawn, it gets drawn, and something then makes the minimal changes to draw the next thing. In the new APIs, we want to be doing all this kind of thing on a bunch of threads, and suddenly having everything fiddling around with a single giant data table is a world of pain. The new interfaces (which slowly appear amongst my code when I do an update) try to... um... pelletise(?) the process: render state is associated with an object, and work to be done exists as a set of objects with appropriate render state on each, so they can be built anywhere, on any thread, chunked together on another, and only come back to the main render thread to be thrown at the GPU.
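
    If it helps to picture it, here's a very rough sketch of that "pelletised" idea in generic C++ - these are not our actual interfaces, and all the names are made up purely for illustration:

    ```cpp
    // Illustrative only: each draw carries its own state ("packet"), so packets can be
    // built on any thread, and only the final submission touches the render thread/GPU.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct RenderState {          // everything the draw needs, no globals
        uint32_t shaderId;
        uint32_t textureId;
        uint32_t blendMode;
    };

    struct RenderItem {           // one self-contained piece of work
        RenderState state;
        uint32_t    meshId;
        uint64_t    sortKey;      // e.g. pack shader/texture to minimise state changes
    };

    // Any worker thread can do this without touching shared renderer state.
    RenderItem BuildItem(uint32_t mesh, const RenderState& rs) {
        return RenderItem{rs, mesh, (uint64_t(rs.shaderId) << 32) | rs.textureId};
    }

    // Render thread: merge the per-thread chunks, sort, then submit to the GPU.
    void Submit(std::vector<std::vector<RenderItem>>& perThreadChunks) {
        std::vector<RenderItem> all;
        for (auto& chunk : perThreadChunks)
            all.insert(all.end(), chunk.begin(), chunk.end());
        std::sort(all.begin(), all.end(),
                  [](const RenderItem& a, const RenderItem& b) { return a.sortKey < b.sortKey; });
        for (const auto& item : all) {
            // SetShader(item.state.shaderId); SetTexture(item.state.textureId); Draw(item.meshId);
            (void)item;
        }
    }
    ```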

    Hope that helps,
    Ben
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    I've been re-playing Crysis 2 again recently, and I've noticed something that's present in basically every Cryengine game (SC included), so perhaps you engine wizards can answer this question:

    Why is Cryengine so. Damn. SMOOTH?

    For some reason, Cryengine games look and feel so much smoother than virtually every other game I play, at the same framerate.

    I have a 75 Hz monitor. Youtube videos at 60 FPS look smoother than most games at 75 FPS, EXCEPT Cryengine games, which look even smoother than pre-rendered YT videos.

    Hell, I'd argue that Cryengine looks somewhat smoother than many other engines at lower FPS. When SC or Crysis 2 are locked at vsynced 75 FPS it's such a joy for the eyes. Can't say the same for other games. CSGO for instance, doesn't give me anywhere near the same feeling of smoothness.

    So why is that? Is it some advanced frame smoothing tech at work, motion blur, a combination of the two, something else?
    I understand there are a lot of factors that go into this (frame pacing, vsync, double/triple buffering, mouse smoothing, etc etc), but still, CryEngine always sets itself apart when it comes to smoothness, at least from my experience.
    I would really love to know why, if you can answer this. Thanks in advance :)

    Hi @Fushko,
    I'm honestly not sure... if I had to guess, I'd say it'd be the motion blur. For all that motion+transparency is the bane of my life, when motion blur's working right you don't see it at all and it makes the game feel excellent.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    @BParry_CIG
    Hey Ben, I had a question regarding the text in this picture. No one I ask seems to be able to tell me what this error is. It takes looking in the right place at the right time to see. Is it a normal thing or something we should be looking for? I'd love to know more about this feed. y6Rb-thzFPBXKs2gUFpU7PDLKTU1T_D0fliotG-S

    Sorry, @ArmoredCitizen, all I see there is a spew of text. I think maybe that URL is only accessible by the dropbox's owner.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG

    @BParry_CIG oops thank you for responding, looks like I messed that up. this one should work. source.png

    This is really one for a physics programmer to answer I expect. There's a game entity (narrowing it down to basically any single thing in the game) that's presumably trying to move to a new physics grid but isn't supposed to. Maybe it's an object that doesn't have physics properties?
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    What is going to be taken from CryEngine 5?

    D3D12? Voxel clouds? GPU particles? An optimised renderer?

    Well, the first and last I know we're not taking, because we're doing it ourselves :D. Personally I've not even had time to look at the CE5 feature list; at some point you have to stop grabbing new tech and get the game done.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    I'm wondering how it happened that the fancy red M50 engine parts behind the glass went missing. I know that recently it was made so that when an object is missing LODs, it will simply not be rendered in those LODs, but apparently there already was an LOD-0 for those engines, yet they are currently not rendered even at very close distance. Was their poly count still too high and negatively impacted performance?
    (The missing engine parts are really devaluing the M50 appeal, haha, and add to other current annoyances with this ship - like dropping out of boost/AB unlike any other racing ship.)

    Probably related questions are how the odd LOD switching of Connie Phoenix (low detail) and Greycat buggy (vanishing wheels) in the hangar happened.

    Also wondering about the Mustang Gamma cockpit closing anim lately having that pointless delay before closing. Really puzzled about how such things happen.

    Were all these deliberate workaround-like actions or unfortunate side effects of other changes?

    I can't speak to any of the specific examples you gave, but recently we've definitely made a switch in how we handle things. Previously, if something was faulty the engine would do its best to keep things looking good; now we've switched to a "break when it's broken" model (like, as you mentioned, dropping objects that don't have an appropriate LOD instead of rendering the nearest available). One thing that's shaken out of that, though, is quite a few cases where the LOD algorithm was doing something inappropriate - for instance, an object would get attachments but would still be working from a poly density estimate without them. I know at least a couple of LOD estimate fixes are in the 2.4.0 branch, so hopefully some of what you're talking about is already solved.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    Hi. I've seen Mark Abent use a tool called Recode. It looks amazing.

    Do I have to be in a secret society to buy it? If so, which one? I think I'm on board.

    1) Yes, it's amazing.
    2) Looks like they're still at the point of giving out 30-day trials to "selected companies" - indefiant.com
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    Hey @BParry_CIG !
    I have been thinking about Temporal AA as of late and wondering if there are plans to integrate it into a lot of effects work eventually, so as to clean up their lower-resolution nature over multiple frames / make them more temporally stable in motion. Effects such as Depth of Field, Volumetric Lighting, POM, SSR (if it ever works again! :P) etc...

    That is one thing that UE4 has done pretty well (it seems to be present as a clickable option or built into many effects there) and it honestly really helps clean up lower samples.

    So basically, TAA not just for IQ of raw screen pixels, but for the effects themselves as well?

    Thanks for any response!

    edit - btw this topic being a catch-all for just "programming" is kind of disadvantageous, as there are a lot of posts coming in here (or perhaps posts are being discouraged even) that have nothing to do with the work of the primary responder (i.e. a graphics dev!). Would it be possible to have separate programming, game programming, network programming, graphics programming threads? IMO, it would make much more sense and would pre-parse the posts!

    It's definitely true that UE4 has gone whole-hog with their TAA stuff. I get the impression that they've unified everything around it - Ali was telling me they don't even support dynamic lights with alpha blending; where they need it they just render alternating opaque and fully transparent pixels, then rely on the AA to fix everything?
    I doubt we'd be able to do anything quite so integrated; since we're not starting the engine from scratch with no game in development, we can't just tear out a feature when it's incompatible with the greater technical vision. On the other hand, individual features might be able to line up with temporal techniques independently.

    As for splitting the programming threads, not something I have a say in. I guess you'd have to bug @DiscoLando-CIG?
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    Hello, I'm not sure if this has been asked before, but will Star Citizen support DX12 in the near or distant future?

    There's ongoing work from those kind folks in Frankfurt to rework the engine structure into something that will benefit from D3D12/Vulkan. I don't know what their schedule looks like, but the plan is that once that's in place, we add in a/some backend(s) that make use of it.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG

    How do you detect collision with a PG planet surface?

    If you have two meshes and they overlap, you detect a collision - that's the normal thing. But with a PG planet 'surface', there is no mesh; it is generated as needed. So since it does not 'exist', per se, there is nothing to detect a collision with. So, I don't really understand how you guys do it.

    I'm hoping for a simple, easy-to-digest answer to something that may not easily have one. Care to give it a shot?

    Thanks for all the hard work, we all really do appreciate it, even if we do not understand most of it. :)

    Hi @steve-2001,
    Luckily I know this one and it's a simple answer - we make a mesh. :D
    Edit: to expand on this, the visual part of it is a procedurally-generated mesh anyway. The work is done on CPU, so the same system can output a physics-friendly mesh.
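
    For illustration only (this isn't our terrain code, and the height function here is just a stand-in for the real noise stack), the general pattern looks something like this - the same CPU-built triangle data can be handed both to the renderer and to the physics system as a mesh:

    ```cpp
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Vec3 { float x, y, z; };

    float Height(float x, float z) {              // stand-in for the real procedural height
        return std::sin(x * 0.1f) * std::cos(z * 0.1f) * 5.0f;
    }

    struct PatchMesh {
        std::vector<Vec3>     vertices;
        std::vector<uint32_t> indices;            // triangle list
    };

    PatchMesh BuildPatch(float originX, float originZ, int n, float cellSize) {
        PatchMesh m;
        for (int j = 0; j <= n; ++j)
            for (int i = 0; i <= n; ++i) {
                float x = originX + i * cellSize, z = originZ + j * cellSize;
                m.vertices.push_back({x, Height(x, z), z});
            }
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                uint32_t a = uint32_t(j * (n + 1) + i), b = a + 1;
                uint32_t c = a + uint32_t(n + 1),       d = c + 1;
                m.indices.insert(m.indices.end(), {a, c, b,  b, c, d});
            }
        return m;   // the renderer uploads this; physics wraps the same arrays in a tri-mesh shape
    }
    ```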

    Hello, I'm not sure if this has been asked before, but will Star Citizen support DX12 in the near or distant future?

    There's ongoing work from those kind folks in Frankfurt to rework the engine structure into something that will benefit from D3D12/Vulkan. I don't know what their schedule looks like, but the plan is that once that's in place, we add in a/some backend(s) that make use of it.

    @BParry_CIG

    Is per-eye rendering from multiple GPUs (1 GPU per eye) something that we can possibly expect to see for VR support, since that is supposed to be something that can be enabled in DX12/Vulkan?

    Hi @Neokolzia,
    GPU-per-eye rendering is definitely something we've discussed, though doing it naively leads to a lot of duplicate work (generating scene shadows etc on both cards) and otherwise causes load-balancing questions (Doing half the shadows on each card and copying them? Predicting what "half the shadows" even is, etc). Alternatives to that include doing both eyes on one card, but doing all the other GPU work for the scene on the other, which means if you're lacking power you're in a better position to do reprojection-based 3D instead of two full camera runs (though I'm not a fan of the idea). It's the kind of decision where we'd need the thing actually up and running to profile what works.

    Is it not true that continued production of Star Citizen and SQ42 on D3D11 will make it increasingly more difficult to adapt to the modern APIs in the future, where essentially the entire game will need to be rewritten to support them entirely?

    Hi @Partieplayin,
    Certainly if we just kept soldiering on with the existing engine design (which, to be honest, looks more like it was structured to be optimal on D3D9) we'd just be digging ourselves further into a hole. As it is, we've got an overview of what's going to change, what needs to be written in a multi-threadable way even if it's not actually possible to multi-thread it yet, etc. Bear in mind that a lot of the real benefits and difficulties of the next-gen APIs don't come from new GPU features so much as from fundamental changes to the CPU-side code.
    To put it another way, we're aiming for a soft transition where we go from "D3D11 code dreaming of being D3D12 code" to "D3D12 code optionally pretending to be D3D11 code". There's maybe a handful of next-gen GPU features that we have noticed would make our render tech faster/nicer; I imagine that once we're up and running with a next-gen API we might start special-casing bits and pieces to take advantage of it where it's available. If we did that for several years, it might become unmaintainable, but by that point there'll be a lot less 11-class hardware in the market.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    The monthly report says that you're preparing to move from deferred lighting to tiled lighting. Do you have by any chance a handy link that explains tiled lighting for people who understand rudimentarily how deferred lighting works?
    Also knowing that deferred lighting was pretty much responsible for an entire generation of games with pseudo AA or no AA at all, how does tiled lighting fare in the anti aliasing department?

    I don't have any links on hand, but I can try to explain briefly.
    Conventional deferred works by drawing each light as a piece of geometry, for each pixel it touches it decodes the G-Buffer, calculates its contribution, and adds that to the output image. The drawback to this is that you're forced to read and decode the G-Buffer again each time a new light hits a pixel, and also the blending costs of additively blending into slow memory.
    Conventionally, tiled works by submitting the full list of lights in a buffer, then having each 8x8 tile work out what lights touch it in a compute shader. By working in 8x8 groups they're doing this culling with 64 lights in parallel, and they write the result into shared (ie fast, on-chip) memory. They then decode the G-Buffer once, run through the list they built doing the lighting (this time with pixels in parallel so 1 light at a time), add up all the results locally and write them out once.
    We've had trouble with performance on this: the culling still has to consider lights from all over the screen and the output of the culling tends to have a lot of false positives to be fast, so we picked up some tips from Dmitry Zhdan's presentation at GDC that suggests going back to drawing the lights as meshes, but only at low resolution to update the per-tile lists. Slower memory trades off for greater accuracy and less wasted work.
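
    If it's useful, here's a very simplified CPU-side sketch of the binning idea - the real version runs in a compute shader with per-tile depth bounds, and these names and numbers are purely illustrative:

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr int kTileSize = 8;                   // 8x8 pixel tiles, as described above

    struct ScreenLight { float x, y, radiusPx; };  // light already projected to pixels

    // Returns one list of light indices per tile.
    std::vector<std::vector<uint16_t>> BinLights(const std::vector<ScreenLight>& lights,
                                                 int width, int height)
    {
        const int tilesX = (width + kTileSize - 1) / kTileSize;
        const int tilesY = (height + kTileSize - 1) / kTileSize;
        std::vector<std::vector<uint16_t>> tileLists(tilesX * tilesY);

        for (size_t i = 0; i < lights.size(); ++i) {
            const ScreenLight& l = lights[i];
            const int x0 = std::max(0, int(l.x - l.radiusPx) / kTileSize);
            const int x1 = std::min(tilesX - 1, int(l.x + l.radiusPx) / kTileSize);
            const int y0 = std::max(0, int(l.y - l.radiusPx) / kTileSize);
            const int y1 = std::min(tilesY - 1, int(l.y + l.radiusPx) / kTileSize);
            for (int ty = y0; ty <= y1; ++ty)      // conservative: every tile touched by
                for (int tx = x0; tx <= x1; ++tx)  // the light's screen-space bounds
                    tileLists[ty * tilesX + tx].push_back(uint16_t(i));
        }
        return tileLists;  // shading then decodes the G-Buffer once per pixel and loops
    }                      // over only the lights in that pixel's tile list
    ```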

    As for antialiasing, the issue in both kinds of deferred is that you have to calculate lights per-pixel, but you want to do them per-sample where there's an edge. In conventional deferred that means building a mask of which is which, drawing every light twice to do both modes, and hoping the GPU would schedule the work nicely. With full deferred it's arguably easier, since you can just do the per pixel work in a loop, then have another loop where pixels that need extra per-sample work share it out among the ones that don't.
    On the other hand, the other big issue with AA and deferred shading is that the G-Buffer is a big chunk of memory. At 1080 it's at least 32MB, quadruple that at 4K, and multiply by the number of AA samples you have. There are some more esoteric forms of tiled shading out there that use far less memory, and the "decode G-Buffer" step becomes the "look up what material you're meant to be and calculate everything now" step, but that would mean upending half the engine, putting every material shader into one giant shader, decals have to apply like lights, cats and dogs living together, etc. Probably not a good idea for now.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    The monthly report says that you're preparing to move from deferred lighting to tiled lighting. Do you have by any chance a handy link that explains tiled lighting for people who understand rudimentarily how deferred lighting works?
    Also knowing that deferred lighting was pretty much responsible for an entire generation of games with pseudo AA or no AA at all, how does tiled lighting fare in the anti aliasing department?

    I don't have any links on hand, but I can try to explain briefly.
    Conventional deferred works by drawing each light as a piece of geometry, for each pixel it touches it decodes the G-Buffer, calculates its contribution, and adds that to the output image. The drawback to this is that you're forced to read and decode the G-Buffer again each time a new light hits a pixel, and also the blending costs of additively blending into slow memory.
    Conventionally, tiled works by submitting the full list of lights in a buffer, then having each 8x8 tile work out what lights touch it in a compute shader. By working in 8x8 groups they're doing this culling with 64 lights in parallel, and they write the result into shared (ie fast, on-chip) memory. They then decode the G-Buffer once, run through the list they built doing the lighting (this time with pixels in parallel so 1 light at a time), add up all the results locally and write them out once.
    We've had trouble with performance on this: the culling still has to consider lights from all over the screen and the output of the culling tends to have a lot of false positives to be fast, so we picked up some tips from Dmitry Zhdan's presentation at GDC that suggests going back to drawing the lights as meshes, but only at low resolution to update the per-tile lists. Slower memory trades off for greater accuracy and less wasted work.

    As for antialiasing, the issue in both kinds of deferred is that you have to calculate lights per-pixel, but you want to do them per-sample where there's an edge. In conventional deferred that means building a mask of which is which, drawing every light twice to do both modes, and hoping the GPU would schedule the work nicely. With full deferred it's arguably easier, since you can just do the per pixel work in a loop, then have another loop where pixels that need extra per-sample work share it out among the ones that don't.
    On the other hand, the other big issue with AA and deferred shading is that the G-Buffer is a big chunk of memory. At 1080 it's at least 32MB, quadruple that at 4K, and multiply by the number of AA samples you have. There are some more esoteric forms of tiled shading out there that use far less memory, and the "decode G-Buffer" step becomes the "look up what material you're meant to be and calculate everything now" step, but that would mean upending half the engine, putting every material shader into one giant shader, decals have to apply like lights, cats and dogs living together, etc. Probably not a good idea for now.
    Thanks for the explanation. I hope I understood it right. This is all damn interesting and I feel like the underlying logic is actually very simple and intuitive, but sort of taken to the extreme.
    So, the real-life result should be that tiled lighting gives a performance boost, but on the other hand gives anti-aliasing a hard time? I actually hoped to see AA in Star Citizen some time. Since you said that it's all very memory intensive, would it be another point on the list of things HBM has the potential to fix?
    Throwing more resources at a problem is always a bit like a fix. But often when you get new resources you find other people want them for something else. I reckon if the supply of memory suddenly doubled, a lot of people would want to double the resolution of all the textures instead (which would be problematic since that uses 4x the memory, but that doesn't mean I'm wrong).
    I think mostly the barrier to bringing back MSAA is a maintenance issue though. Once a buffer is MSAA, you have to maintain MSAA-ness correctly for everything in the pipeline until the point you go back to non-MSAA. Since CryEngine wasn't doing this through any sort of rigorous system as much as just having every subsystem know what the right thing was to do, it was fragile and is now probably ruined. So we'd have to fix that, add the actual MSAA code itself into the renderer, and find the memory somewhere. Not impossible, but also quite a chunk of low-priority work.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    @BParry_CIG
    Looking at the .pptx presentation posted above on the tiled lighting (or geometry proxy lighting? What do we call this thing? :D), it makes mention of two distinct light types and how to work with them: omni-directional point lights and spot lights. The question is, how does this work for area lights (which the game currently uses everywhere)? Are there proxy geometries for those as well (rectangle, square, and bulb size) or are they done differently?

    edit: also as a heads up, something completely broke SSR a while back (does not project correctly anymore!)

    Previous patch: http://abload.de/img/starcitizen_2015_12_0ddsae.png
    Latest patch: http://abload.de/img/starcitizen_2016_05_0zcfpn.png

    Yeah, we're in a nice place there because rectangular lights are a perfect fit for that proxy geometry (ignoring the box on the end).
    We'll have to look at that SSR I guess.

    Throwing more resources at a problem is always a bit like a fix. But often when you get new resources you find other people want them for something else. I reckon if the supply of memory suddenly doubled, a lot of people would want to double the resolution of all the textures instead (which would be problematic since that uses 4x the memory, but that doesn't mean I'm wrong).
    I think mostly the barrier to bringing back MSAA is a maintenance issue though. Once a buffer is MSAA, you have to maintain MSAA-ness correctly for everything in the pipeline until the point you go back to non-MSAA. Since CryEngine wasn't doing this through any sort of rigorous system as much as just having every subsystem know what the right thing was to do, it was fragile and is now probably ruined. So we'd have to fix that, add the actual MSAA code itself into the renderer, and find the memory somewhere. Not impossible, but also quite a chunk of low-priority work.

    I'm not sure that's the right priority for it. If you were to ask me what the biggest graphical difference between the "Meet the old man" trailer or the close-up shots of "Pupil to Planet" and the actual ingame graphics is, I'd say it's the fact it doesn't have a million pixels jumping back and forth with every frame. (I assume the trailer was made with downsampling)
    Also, HBM isn't just size, but also speed. Do textures really care about memory speed?
    My point was more that the G-Buffer issue is size, not speed. Maybe it's a little about speed, it's hard to profile that, but mostly I'm thinking that my size estimate was assuming we're on 32-bit textures when I might be forgetting one of them's at 64. We've also got two output framebuffers at 64 and 32, so that's almost doubled the estimate again if we're talking about preserving MSAA right through to the end of the pipeline.
    Another issue I just thought of: Parallax Occlusion Mapping is used everywhere, gives us excellent results that I can't argue with, but since it has no poly edges it won't multisample. We'd have to run anything that uses POM at per-sample frequency to get that, and we have a LOT of it. Not sure how you get round that actually.
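
    For anyone wanting to sanity-check the sizes being thrown around, here's a back-of-envelope version - the bytes-per-pixel figures are assumptions based on the numbers above (four 32-bit G-Buffer targets, plus a 64-bit and a 32-bit output buffer), not our exact layout:

    ```cpp
    #include <cstdio>

    int main() {
        const double pixels1080 = 1920.0 * 1080.0;
        const double mb = 1024.0 * 1024.0;
        const double gbuffer     = pixels1080 * 16.0 / mb;                   // ~32 MB at 1080p
        const double withOutputs = gbuffer + pixels1080 * (8.0 + 4.0) / mb;  // + 64-bit and 32-bit outputs
        const int samples[] = {1, 2, 4, 8};
        for (int s : samples)
            std::printf("%dx MSAA: G-Buffer %4.0f MB, with outputs %4.0f MB, at 4K %5.0f MB\n",
                        s, gbuffer * s, withOutputs * s, withOutputs * s * 4.0);
        return 0;
    }
    ```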
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    I know they were only just announced last Friday, but I know some companies get early ones to test with. Have you guys got an NVIDIA GTX 1080 to test with yet?

    If we have, they ain't told me. Probably for the best.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    This has probably been asked before, but how do you feel about CryEngine for Star Citizen? If you had the opportunity, would you have chosen a different engine from the beginning, or is CryEngine so amazing you couldn't fathom using something different? If this is a stupid question, just give me a stupid answer and I'll shut up.

    Speaking entirely as an individual, CryEngine has unlocked previously undiscovered reserves of rage and hate in my body tissues. There are decisions in there that, if I knew who made them, I'd have to challenge them to a boxing match. I'd lose, horribly, but it would be a matter of honour.
    That said, let's consider the alternatives. If you look at Unreal 4 now, it looks like an amazing option, fresh and clean and new, but at the point where this project started it wasn't publicly available so that would have taken some doing. Unreal 3, though I've not worked with it, seems far more bolted down to produce Unreal-3-ish games, graphically it depended on pre-baked lightmaps all over the place, and was getting a bit long in the tooth. Unity you wouldn't have been able to get a source code license, as far as I know. I last looked inside Source when I was a student, so I've no idea if it would have been a good idea, and I know nothing of ID-Tech stuff, so no opinion there.
    As it is, things worked out nicely. There's a lot of nice stuff in there, we've got a load of ex-CryTek staff who can point out where it is, and they've clearly been making quiet plans against the parts that need to be rewritten for years.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Wow, this blew up.

    This has probably been asked before, but how do you feel about CryEngine for Star Citizen? If you had the opportunity, would you have chosen a different engine from the beginning, or is CryEngine so amazing you couldn't fathom using something different? If this is a stupid question, just give me a stupid answer and I'll shut up.

    Speaking entirely as an individual, CryEngine has unlocked previously undiscovered reserves of rage and hate in my body tissues. There are decisions in there that, if I knew who made them, I'd have to challenge them to a boxing match. I'd lose, horribly, but it would be a matter of honour.
    That said, let's consider the alternatives. If you look at Unreal 4 now, it looks like an amazing option, fresh and clean and new, but at the point where this project started it wasn't publicly available so that would have taken some doing. Unreal 3, though I've not worked with it, seems far more bolted down to produce Unreal-3-ish games, graphically it depended on pre-baked lightmaps all over the place, and was getting a bit long in the tooth. Unity you wouldn't have been able to get a source code license, as far as I know. I last looked inside Source when I was a student, so I've no idea if it would have been a good idea, and I know nothing of ID-Tech stuff, so no opinion there.
    As it is, things worked out nicely. There's a lot of nice stuff in there, we've got a load of ex-CryTek staff who can point out where it is, and they've clearly been making quiet plans against the parts that need to be rewritten for years.
    How would all that compare to building a dedicated engine from scratch tailored to the game scope?
    With the disclaimer that I've never worked on a team that built an engine from scratch, it would probably have been a terrible idea. Certainly if you'd wanted the current scope, but even if you'd reined it in to something much more restrictive, consider how much wouldn't even have been possible to start until months or years of work had gone in.
    Random example: there's a lot of talk about gutting the network serialisation to make it better, but working from scratch would just mean no network at all until that kind of work had been done. Worse, you'd probably have to build a rough-cut version of it first just so you could get people moving on the systems that had to talk to it. Along with that, no editor, no designer whiteboxing, no ability for artists to see what their work would look like in-game, or any really honest estimates of its performance.

    I don't know, man. Given all the delays because of "refactorizations", SM issues, issues with large maps, multiple netcode revisions and whatnot, I am not 100% clear which option would have been more efficient, to be honest. Hence my question.

    Not going to say there haven't been some unpleasant surprises, but a lot of those surprises are basically finding out that a feature you'd assumed was going to work off the shelf actually runs badly / is totally broken / was deleted years ago and replaced with a comment saying "this was busted for years". The real challenge when something's not right is talking yourself out of just killing it and writing from scratch, and the reason you can't do that is it would take too long.

    I'm not entirely sure creating a custom engine would take longer. Unlike "general engines" that need to be versatile and usable by 3rd parties, they could easily create their custom solution that would, while not being usable for other games and by other teams, definitely be a clear cut for Star Citizen.

    Creating an engine is not that much of a problem when you're not trying to create a game engine for any kind of game someone might come up with. With the amount of money at their disposal, buying middleware required is not really something to be afraid of. They're creating their own physics engine anyway, so that's not a problem. They allegedly rewrote scene manager, so that's also out of the picture. Sound is easily bought/made, there are middlewares that would help with patching etc.

    I'm sure Ben can confirm that working on a brand new engine (assuming there is know-how on the team) wouldn't be much different from reworking a huge engine in all its relevant aspects.

    I disagree. Buying a stack of middleware from different companies and gluing it together, vs buying a license for an already-made engine, seems like a world of pain.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:


    This has probably been asked before, but how do you feel about CryEngine for Star Citizen? If you had the opportunity, would you have chosen a different engine from the beginning, or is CryEngine so amazing you couldn't fathom using something different? If this is a stupid question, just give me a stupid answer and I'll shut up.

    Speaking entirely as an individual, CryEngine has unlocked previously undiscovered reserves of rage and hate in my body tissues. There are decisions in there that, if I knew who made them, I'd have to challenge them to a boxing match. I'd lose, horribly, but it would be a matter of honour.
    That said, let's consider the alternatives. If you look at Unreal 4 now, it looks like an amazing option, fresh and clean and new, but at the point where this project started it wasn't publicly available so that would have taken some doing. Unreal 3, though I've not worked with it, seems far more bolted down to produce Unreal-3-ish games, graphically it depended on pre-baked lightmaps all over the place, and was getting a bit long in the tooth. Unity you wouldn't have been able to get a source code license, as far as I know. I last looked inside Source when I was a student, so I've no idea if it would have been a good idea, and I know nothing of ID-Tech stuff, so no opinion there.
    As it is, things worked out nicely. There's a lot of nice stuff in there, we've got a load of ex-CryTek staff who can point out where it is, and they've clearly been making quiet plans against the parts that need to be rewritten for years.
    How would all that compare to building a dedicated engine from scratch tailored to the game scope?
    Well, for the sake of giving this even a snowball's chance in Hades of being a realistic option, I'm going to mostly ignore the upfront penalties incurred in such a process (namely: the cost and the time) and assume that you (playing the role of Chris Roberts/CIG) are able to essentially conjure an engine out of your rear end in November 2012.

    So, with nil effort, you've got the Citizengine. Great, right?

    Well, maybe not. First of all, nobody apart from you (and in some cases, not even you) knows how this thing works. Your pool of potential hires contains exactly zero experience with the Citizengine. There are no tutorials, no guides, nothing to teach anybody how to use it beyond what you've written down and the time you're willing to spend sharing that knowledge face-to-face. For the first few years, you are the engine.

    Next, assume you've been extremely lucky, and have managed to avoid the things that make Mr Parry consider strapping on boxing gloves and thumping bemused German software architects. This, despite the fact that your goal is to make a game, not the tools to make a game, and your expertise isn't really focussed on that area. Congratulations, you now have a game engine. Being a piece of software that exists, it has bugs and weaknesses. Being a piece of software produced by you, it has bugs and weaknesses that nobody else knows about, has ever run into before, or has the faintest clue how to start solving. You and your team will have the honour of exploring the river barefoot and finding the crocodiles. Odds are that by the end of it, Mr Parry will have skipped the gloves and gone straight to semi-automatic weaponry.

    Of course, nothing ever stands still in the world of tech. You will have to keep not only your game but your engine updated as new features are brought out, drivers are released, platforms are promised and priorities shift. Hopefully you have the expertise by now to rewrite the whole thing for Linux, to adapt it for VR from scratch, and a million other things. If not, you now face the unenviable choice of shifting yet more resources away from your game, or ditching your promises completely. Neither option is going to win you any friends.

    At the end of the day, you look to perhaps wring a little recompense from the effort you've put into this engine project. I mean, you got it for $0, so anything is pure profit, right? Well, no, but that's rather academic. Nobody wants to look at, license, or have anything at all to do with the hulking monstrosity that shines when it's rendering Star Citizen... but is pretty much useless for anything else. You've honed it to a fine edge. Too fine. Turns out that what the engine accomplishes isn't really needed in most cases, and for the cases where it is, the looming specters of Learning Curve and Getting Support stand like Scylla and Charybdis outside your office - and are just as effective at preventing passage. The barriers to entry are just too steep, especially in the already risky world of game development.

    Now the game's done, and people are starting to leave. How much use to them is years of experience with an engine nobody else anywhere will ever use? How much more use are you going to get out of it once Star Citizen is 'done'? By that point, it's nearly time to start the whole cycle over again - Citizengine is showing its age, despite your best efforts. There is, after all, only so much a game studio can really do while maintaining the focus on the game.

    Oh, and you just woke up from your dream - turns out that making an engine isn't free, and you have several hundred thousand people who are really, really, really excited to see you start making stuff. You've seen their photos. They don't look like the sort who are going to accept "Hey, we're 10% of the way done getting the physics engine alpha working!" as a status update. You think they are very impatient. Is that one holding a cricket bat? Oh god.

    (Yes, I know I'm not a dev, and thoroughly expect this post to vanish into the ether. It wasn't going to let go of my head until I wrote it out, though, so I claim duress.)
    Oh look, you just said almost exactly everything I did.


    "Odds are that by the end of it, Mr Parry will have skipped the gloves and gone straight to semi-automatic weaponry."

    Now how do I set my forum signature...
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    @BParry_CIG


    Yeah, we're in a nice place there because rectangular lights are a perfect fit for that proxy geometry (ignoring the box on the end).
    We'll have to look at that SSR I guess.

    Thanks for the response! That is pretty awesome to hear how it should be a good fit.
    Speaking of area lights in the new system, could this new system also represent other, more arbitrary shapes, perchance? I think you guys have mentioned it before, but is there the intention to eventually rework artist-placed lights based upon a catalogue of IES profiles tagged to actual assets with proper/standardised lux and lumen settings? Some lights and light panels currently in the game (like the torch and ship togglable lights) seem to have massive bulb sizes / seem to be modifying specular in ways they should not. I remember that value being a CE value from at least 3.5.4, though... (artificially modifying surface specular via a light instead of via the material).

    BTW, what exactly does this mean in the monthly report? Like, having emissive-channel textures be standardised, or?

    We’ve also been finalizing our work on the ‘light linking system’ which allows light sources and glowing light-fittings to be linked together so that the brightness of the light fittings accurately reflects the realistic intensity of the bulb. This is crucial in getting the full benefit of the new HDR flare & bloom tech which we’re hoping to enable for the next release. The latest changes have refactored this to allow it work with the upcoming Object Container system.

    Hello again!
    Area shapes: Not really. Or at least, my changes don't help with it. "Area Light" in CryEngine parlance really means rectangle, and has some fairly fixed behaviour. On the other hand, we've got projector lights that project a texture. I'd imagine you could retrofit that into a class of light that takes some IES-sourced data as a texture instead, though I'm not sure how mathematically rigorous what I'm imagining would be.
    That thing where a light can be set to affect specular less than diffuse: We want it gone. It's too much of a liability when someone can set up an environment that looks nice in diffuse, then someone with a metal hat walks through the area.
    Light linking: This ties in with the new optics system. Since lens flares will now be generated by actual brightness in the scene, not hand-placed sprites, actual brightness in the scene has to be within an order of magnitude of the right value. So the idea is that the light looks at the emissive material it's linked to, and tells it how much energy should be coming out, and the material receives a multiplier. Also handy in that turning a light off will make the bulb stop looking like it's on.
    You'll be pleased to hear that, even though some tech limitation means we only support one of the light units (don't ask me which, I can never remember), Okka's reserved a field in the data structure that says it's that unit, so we keep the door open for doing things that handle angle-dependence etc more properly.
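
    To illustrate the light-linking idea in the simplest possible terms (made-up names, not the actual system):

    ```cpp
    // A light drives an emissive multiplier on the material it's linked to,
    // so the glowing bulb geometry tracks the light's actual state/intensity.
    struct Light {
        bool  enabled;
        float intensity;        // in whichever single unit the engine supports
    };

    struct EmissiveMaterial {
        float authoredEmissive; // artist's base emissive level
        float linkedMultiplier; // written by the linked light each update
    };

    void UpdateLightLink(const Light& light, EmissiveMaterial& mat) {
        // The light tells its linked material how much energy should be coming out;
        // the material just stores a multiplier on top of what the artist authored.
        const float target = light.enabled ? light.intensity : 0.0f;
        mat.linkedMultiplier = (mat.authoredEmissive > 0.0f) ? target / mat.authoredEmissive
                                                             : 0.0f;
    }
    ```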
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG


    Hello to you again as well! @BParry_CIG


    Hello again!
    Area shapes: Not really. Or at least, my changes don't help with it. "Area Light" in CryEngine parlance really means rectangle, and has some fairly fixed behaviour. On the other hand, we've got projector lights that project a texture. I'd imagine you could retrofit that into a class of light that takes some IES-sourced data as a texture instead, though I'm not sure how mathematically rigorous what I'm imagining would be.

    Aye! Just speaking theoretically about it, would the source texture of the IES profile just cover the diffuse projection and falloff? Or also the specular shape?
    I guess really the only usual suspect that's kind of missing would be tube lights in the current system (i.e. they have 360-degree influence like a point light, but stretched). I guess you could represent that with two thin, rectangular area lights back to back atm, though...
    I did a version of Unreal / Brian Karis's tube lights on Elite, they're pretty straightforward and would be compatible with this engine. Just need a few more hours in the day, is all.

    That thing where a light can be set to affect specular less than diffuse: We want it gone. It's too much of a liability when someone can set up an environment that looks nice in diffuse, then someone with a metal hat walks through the area.

    You know what to do, Ben. Use that newly found semi-auto firearm and cleanse that code. Horrible non-physically-based parameters that artists can abuse need to go! :D

    Light linking: This ties in with the new optics system. Since lens flares will now be generated by actual brightness in the scene, not hand-placed sprites, actual brightness in the scene has to be within an order of magnitude of the right value. So the idea is that the light looks at the emissive material it's linked to, and tells it how much energy should be coming out, and the material receives a multiplier. Also handy in that turning a light off will make the bulb stop looking like it's on.

    Fan-freaking-tastic!
    If that eventually could plug in to things like bulb size or light shape for diffuse and specular (i.e. go beyond optics and into the light's world representation), it would basically be an amazing standardised system for light assets.

    You'll be pleased to hear that, even though some tech limitation means we only support one of the light units (don't ask me which, I can never remember), Okka's reserved a field in the data structure that says it's that unit, so we keep the door open for doing things that handle angle-dependence etc more properly.

    awww yis. Angle-dependence? As in angle based fresnel / roughness?
    Angle-based roughness would be more of a feature for the receiving surface. What I'm talking about is that, with this initial implementation, the light emitter will look the same brightness from the side as it does from the front, which for spotlights may look a little weird. Stops you from doing the cool X-Files (X-Files is still cool right?) flashy-torch-in-the-dark thing too. The obstacle to making this better is that all the material info has to come through a fairly generic system, so passing just the brightness through is a lot more lightweight than sending everything you'd need to know about a light's orientation, cone angle, etc.


    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG
    Wow, I've been neglecting this thread.
    *theatrical knuckle crack*

    This isn't specifically about Star Citizen, but I could never find a satisfying answer, so hearing your take on it would be great:

    High FOV can be an advantage. In cases where functionality has priority over looks, it might be desirable to increase it a lot, but it then becomes impractical, because every 3D game I have ever seen, when increasing FOV, leaves the edges intact and distorts towards the center, the most important area of the picture. How come I've never seen a FOV implementation where the center remains intact and the picture distorts towards the edges, which is the totally common real-world effect of wide-angle camera lenses?
    Would this be a significant implementation effort in a graphics engine?
    Is it not implemented solely because the regular FOV distortion is a necessary part of the engine anyway? Is nobody, not even devs of FPS, interested in implementing an actually practical high FOV mode that brings environment awareness closer to real-life's ~170° FOV?

    This is a hard one to answer, because it's not doing what you think it is, but I think I see why you think it's doing what it's not.
    First off, why do the edges end up bigger than the middle? The reason is that pretty much all games apply a rectangular projection to the scene, which maps everything to a plane (the alternative would be some kind of fish-eye effect, I guess). Here's a sketch of a frustum; even with this fairly narrow angle you can see that the object to the left is coming out a little larger than the one in the middle.

    frustum1.png

    If you widen the FOV considerably it gets worse, but as you can see, it's the edges that are worse, not the centre.

    frustum2.png

    The interesting thing, though, is that it's not actually worse if you're sat in the right position. If you've set a FOV of 45 degrees, and scooted forward until the monitor fills 45 degrees of your vertical field of vision, your head's on the convergence point and the objects will be stretching exactly as much as perspective is foreshortening the monitor.
    So why does it seem worse? I think we've got a natural tendency not to let things out into our peripheral vision, so when you give someone twice the screen width, they just sit back a little so they can take it all in. Resist doing that and you're theoretically fine, though each extra degree of FOV is exponentially more expensive in screen area. Some people are making curved screens, but I'm not clear how you do the maths to drive one right.
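
    If you want to put numbers on it: for a planar projection, the screen size grows with tan(FOV/2) and the stretch at the extreme edge grows roughly as 1/cos². A quick illustrative calculation (the FOV values are just examples):

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = 3.141592653589793;
        const int fovs[] = {45, 60, 90, 120};            // total FOV in degrees
        for (int fov : fovs) {
            const double half = 0.5 * fov * pi / 180.0;
            // Screen half-extent in multiples of focal length, and how much an object
            // at the very edge stretches relative to one dead centre.
            std::printf("FOV %3d: half-extent %.2f x focal length, edge stretch %.2fx\n",
                        fov, std::tan(half), 1.0 / (std::cos(half) * std::cos(half)));
        }
        return 0;
    }
    ```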
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    I'm sure this has been asked a billion times, but now that consumer versions of VR headsets are finally getting into the hands of users, when can we expect the return of some kind of VR compatibility? Playing this game on my DK2 was one of the greatest experiences of my life... please sirs, may I have some more?


    I see it already asked here, but can't find an answer.
    When can we expect VR to be added? I backed partially on the promise of VR, and this game will be fantastic with VR support, but I worry since adding VR down the road tends to have issues with immersion, poor UI design and other things. It is something that has to be designed for all along or it won't work well.
    When will we have an answer about VR?


    Hey, I just bought a Vive and I thought it was weird no one ever asks, so: when can we expect VR implementation to come? :u

    Hi @Saintt and @FloppyDonkey and @Huntokar, sorry to say I don't have any news for you on that front. Though I imagine the groundwork for that would be done in Germany, since a lot of Crytek's 3D implementation was done by one of the guys there.

    Hey Ben Parry! This is a great Q&A and I hope you can help me out here. Actually, this has been a question across the whole community (at least on Reddit) for a while. With new APIs becoming the standards in the future (DX12 & Vulkan), I wonder what CIG's approach is on this topic.

    In my opinion, and I see that echoed in the SC community as well: we want a future-proofed game which has the best of the very best graphics; in our eyes this can only be done with early integration of DX12/Vulkan, so that you know what the limits are and can build around that.

    On a side note, what are your thoughts on future-proofing? How are you actually future-proofing? For example, are you already working with 8k textures right now but only allowing 2-4k textures in game, so you can activate 8k textures when the hardware allows it (in 2020 or so)? Or do you have other plans for future-proofing the game? Like updating the engine constantly with a dedicated engine team (which you already have!), just like EVE Online and Entropia Universe, which updated their games quite a bit over their product life cycles. What are your thoughts on this? Remember, I love the small nitty-gritty details! Thanks a lot for doing this! :)

    o7

    Hi @Typhi,
    The new API stuff is a pretty slow process, there's a lot of stuff sat around in the engine that barely made sense from a D3D 10/11 point of view, let alone 12/Vulkan. To put it simply, the new APIs gain performance by being very explicit and requiring you to know exactly what's going on at any point. Much of the existing renderer code achieved flexibility in the past by having no idea what's going on at any point, and storing a lot of state in a big central pile which makes it nigh-impossible to multi-thread. What we're seeing on the UK end of the process, therefore, is a slow whittling away at these obstacles, new structures appearing, emails telling us to tell them what rendering state we want, rather than talking to the big pile. It's not a fast process, but they're going deep and wide to make sure it's done right.
    Texture res future proofing, I'm not sure what happens in the art pipeline. I know we're at least sometimes producing oversize resources, but at the same time I'm not sure how important that is long-term. For instance, in any cases where we're using layer-blended texture libraries, so many assets will be looking at the same textures that one would hope you could re-author a library from scratch and see improvement across the board. You'd be better asking in the art thread though.
    As for a "dedicated engine team", I don't think anyone really knows exactly what steady-state development will look like when it arrives. Certainly our team has a list of requests and issues in the backlog that we could chew on for years. Ideally we'd have them all done by yesterday, or at least before Squadron comes out, but realistically there's stuff that won't make the cut, and we'll have plenty left to do post-release.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Hello again, frequent flyer! Another monster post...

    @BParry_CIG

    I am not sure if you worked on it, but SSR is no longer broken and is working correctly in the latest build again (2.4h)
    starcitizen_2016_05_2gbsu8.png
    Still looks puffy and superimposed though, due to the filtering and lack of other physicalised aspects :P

    Yeah, I was wondering about that. We end up with a fairly long lag between stuff we fix and you guys seeing it, so previous mentions were stirring vague memories that someone had pointed out some typos I made in a mass-cleanup (scene depth is sometimes in metres, sometimes in [0,1], I touched about a hundred files trying to make it clearer, and made it clearly wrong in a couple). I guess that was what fixed it? Would still benefit from contact hardening, and probably just not being applied until closer to grazing angles.


    Speaking of SSR, I recently played through Doom (which is fantastic btw, technically and gameplay-wise) and I noticed some great things technically. So I thought I would post some inspirational media. For example, the SSR in the game seems to be done post particles being spawned, GPU particles even, so it reflects them. It gives a great sense of area lighting from them. Here some .webm examples:
    Example 1
    Example 2

    The SSR also seems to have a very long cut off angle, so you very often do not see it blending in and out at the edges of the screen or the bottom of the screen like in other engines (UE4 or CE for example).

    While these specific shots look good, I'm not keen on the idea at all. The way SSR works, it can only work out a hit on an opaque object, so what you're seeing there is likely to be the ray hitting a back wall, but then showing you the smoke that had been drawn in front of that wall. I can see how it would work in enclosed spaces, but I imagine it would start doing very odd things in wider spaces.
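
    To show what I mean, here's a heavily simplified sketch of a screen-space reflection march - illustrative only, not our shader - where the hit test uses the opaque depth buffer, but the colour fetched at the hit point already has transparents composited in:

    ```cpp
    #include <cstdint>
    #include <vector>

    struct Ray2D { float x, y, depth, dx, dy, ddepth; };   // screen-space ray + per-step delta

    uint32_t TraceSSR(Ray2D r,
                      const std::vector<float>&    opaqueDepth,   // width*height, opaque geometry only
                      const std::vector<uint32_t>& colourBuffer,  // transparents already composited
                      int width, int height, int maxSteps)
    {
        for (int i = 0; i < maxSteps; ++i) {
            r.x += r.dx; r.y += r.dy; r.depth += r.ddepth;
            const int px = int(r.x), py = int(r.y);
            if (px < 0 || py < 0 || px >= width || py >= height)
                break;
            // The hit test only knows about opaque surfaces...
            if (r.depth >= opaqueDepth[py * width + px])
                return colourBuffer[py * width + px];  // ...but this colour can be mostly the
        }                                              // smoke that was drawn in front of them
        return 0;  // miss: fall back to cubemap / fade out
    }
    ```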


    Particles receive shadows from any and every light source (not just the sun). Though they are just billboards, so they rotate, thus breaking the effect. Perhaps there is a way to increase the 3D-ness of even a billboard particle?:
    Example 3

    Unhappiness about particle lighting came to a head recently, Ali went totally off-piste (as is a lead's prerogative) to improve things. I think you're going to be pleased with the outcome, though that's 2.5 at the soonest. He even put a fan with a light behind it, because apparently that's what all the cool kids are doing these days.


    Motion blur is extremely smooth, artifact free, and very configurable (check out the motion blur on the chainsaw blade here):
    379720_20160518194716vxq59.png

    The TAA option (8XTSSAA) looks really good even in base 1080p with configurable post sharpening (hard to capture this in a screen or video of course). Really works well with high contrast even:
    doomx64_2016_05_21_12i5xgh.png

    And they also do an interesting thing for the holograms... where instead of being a transparency, it is actually just a screen-space dither-sampled model with an emissive texture on it. Since it is not transparency, it means you can apply object motion blur to it (making it animate really smoothly):
    Dither sampled model: vlcsnap-2016-05-14-18chsw5.png

    Motion blur applied to it (its right hand): vlcsnap-2016-05-14-18imsve.png

    We still need to look at AA and motion blur. We spent a bit of time squinting at the hologram tech, we've decided we can't quite decide how they're doing what they're doing, and we'll have a proper look later.

    See you again soon, no doubt ;)
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    So what is the biggest challenge on the networking side of the game to bring the server FPS up in the Crusader / mini-PU area? From my understanding, the low FPS on Crusader is due to a handicap on the server side and not the client side.


    Is this the right thread for networking questions? Seems the closest fit but I'm not sure.

    Why is SC's FPS locked to the server's performance? And is there work underway to decouple it? In all other MP games I've played, your client's rendering of the game isn't slowed down by the server; if the connection is bad or even broken I still get 60 fps, but enemies stop or similar - networking information isn't being updated as it should, but the game is still rendering frames with the information it has. The game isn't very enjoyable at 15 fps, which is what most servers end up at before crashing. I've had at best around 40 on a completely fresh server, and I usually have 20-30 fps with my GPU mostly idle and one core of my CPU pegged.

    Hi there @Lock_Os and @ChrisK3,
    Note: Not a network engineer. But since no one else seems to be answering...
    As I understand it, the server has some unknown performance issues, it's expected to be slow but is instead even slower. However, the low framerate you're seeing on the client apparently isn't the client matching the server (the server is very slow), but more likely the client receiving too much data about things it really doesn't need to hear about, and taking a long time to handle that. I'm not sure whether people expect the handling to be made faster, but I do believe some of the same tech that's meant to smartly stream areas is also intended to smartly not tell clients that a bin fell over a thousand kilometres away.
    Take this all with a grain of salt, though; I'm probably the least appropriate flavour of programmer to talk of such things.
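    Purely to illustrate the general idea (interest management is a standard multiplayer technique; this is emphatically not our network code, and every name here is made up), the "don't mention the distant bin" filter is conceptually just a relevance-radius check per client:

```cpp
// Illustrative interest-management check: before queuing an entity update for a client,
// ask whether the entity is inside that client's relevance radius at all.
struct Vec3 { float x, y, z; };

static float distSq(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

bool isRelevantToClient(const Vec3& entityPos, const Vec3& clientPos, float relevanceRadius)
{
    return distSq(entityPos, clientPos) <= relevanceRadius * relevanceRadius;
}
```

    The hard part is doing that kind of filtering cheaply for thousands of entities and tying it into the same spatial structures that drive streaming, which is well outside my patch.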
    [hide]

    Is there any plan to apply tessellation between at least SOME LoD stages of ships? Instead of tessellating the whole thing, which would probably cost too much in terms of performance, or not using it at all, which IMO destroys the immersion, since the current implementation of LoDs is ... rather aggressive to say the least (maybe a bug?).

    In general: why isn't tessellation used more often in such scenarios? You wouldn't need to go overboard with it and potentially cripple the performance of some graphics cards out there, but at least slightly smoother transitions wouldn't hurt. I don't really see the need for much tessellation on architecture close to you (unless it has round edges), since other techniques like parallax mapping might do the trick, but not using it on something like LoDs never made sense to me, since I always saw its greatest potential in that area.

    Is there something I'm getting wrong? Are you already planning on using it for that purpose? Is it not supported to that extent by the engine? Or is it simply too taxing even when used sparingly?

    Hi @Joma,
    I'm not sure what the official line is on tessellation, I know it's not something that we use massively though. The main reason we wouldn't use tessellation on LoDs, though, is that we'd probably start creating too many closely-clustered triangle edges if we weren't careful. It's not a widely known fact, but when a pixel is drawn, there's a whole block of four that are forced to draw. Put lots of single pixel triangles in an area and you're looking at 4x the shading cost.
    The LoD situation at the moment is dire, though, I agree. This is partly the render team's fault, replacing a system that was transitioning LoDs at fixed distances but not necessarily delivering performance, with one that calculates transition distances based on poly density. Now the art teams are playing catch-up to work out why certain LoDs think they should drop out almost instantly, and we've given them a bunch of new debug screens to help diagnose things. It's also partly problems on the art side - sometimes LoDs have been generated automatically, but the tool isn't great at working out what features will be important at distance and tends to make a horrible mess of it.
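    For a rough feel of what "transition distances based on poly density" can mean, here's a back-of-envelope sketch - illustrative numbers only, definitely not our actual formula:

```cpp
#include <cmath>

// Back-of-envelope LoD switch distance: estimate how big this LoD's average triangle is
// on screen at distance d, and switch before triangles shrink to the point where every
// shaded pixel drags a whole 2x2 quad along with it.
float lodSwitchDistance(float meshSurfaceArea,   // renderable surface area, square metres
                        int   triangleCount,     // triangles in this LoD
                        float verticalFovRad,    // camera vertical FOV in radians
                        int   screenHeightPx,    // render target height in pixels
                        float minTriangleEdgePx) // e.g. ~2 px to stay quad-friendly
{
    // Very rough average linear size of one triangle, in metres.
    const float triEdgeMetres = std::sqrt(meshSurfaceArea / static_cast<float>(triangleCount));
    // World size s at distance d covers roughly s * H / (2 * d * tan(fov/2)) pixels,
    // so solve for the distance where that equals minTriangleEdgePx.
    return triEdgeMetres * static_cast<float>(screenHeightPx)
         / (2.0f * minTriangleEdgePx * std::tan(verticalFovRad * 0.5f));
}
```

    Plug in a chunky prop versus a very finely modelled one and you can see why the dense one thinks it should drop to the next LoD much sooner than the artist expected.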
    [hide]

    Hi, any plans on adding SVOGI-like global illumination, as in Kingdom Come: Deliverance? I'm a backer of that game, and after this change in lighting it looks beyond anything I've seen to this point. But does it make sense for SC? (light sources etc.)

    0.jpg

    Hi @MingX,
    I think I talked about this before, but it might have been in the now-dead thread. The short answer is it's probably a bad fit for us for two reasons:
    1) "Static" geometry. It builds the voxel grid over several frames by only voxelizing static geometry. We've already had several problems with things that the engine thought would never move, but that we've now built ships out of. So out-of-the-box, it would probably take a little work and then only apply to stations, then with a little more work might apply to the interior of the ship you're in, but would be a monstrous job to make work for the full outdoor environment.
    2) Technical debt. I know for a fact the SVOGI implementation has some subtle bugs in it, because it has copy-pastes of code from the tiled lighting system where I already had to fix them. We just can't know how much more of it doesn't really work, and trying to use it could suddenly swallow a man-month or two that we don't have right now. It's tempting to think of engine features as "flipping a switch", but it's more common that only specific subsets of engine features really play ball, even in the unmodified version of the engine.
    I absolutely agree, though, that some kind of GI solution would be a major lift in quality, and besides the glass shader it's the thing I most frequently bug Ali about. I just don't think it's necessarily SVOGI that will save us.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]

    Can Deep Learning Technology be applied to all aspects of the Star Citizen Universe?

    0.jpg


    For example:
    1. The direction of Star Citizen lore involving the construction/destruction of cities, governments, civilizations, planets, stations, political/religious and socio-economic constructs, etc. For example: pirates try to take over Port Olisar, chaos ensues, and the station's computer malfunctions, causing the station's orbit to decay until it enters Crusader's atmosphere and burns up.
    2. The conceptualization and design of all content such as ships, stations, buildings, and environments, using data pooled from large databases like the internet. For example: new ship manufacturers start to pop up with completely new and uniquely designed ships, all created by the deep learning computer, so all new content creation becomes automated!
    3. The AI of NPCs, so we could actually carry on a conversation with them, and they would actually be doing things that really affect the universe, such as living out their lives and careers just like the real players.
    4. Allowing for procedural generation of animation by feeding the deep learning computer animations from all over the world, then letting it generate an unlimited number of unique animations that the player's character naturally uses based on the surrounding environment. For example, if a player has to go prone underneath a pipe, they simply navigate under it and the character automatically determines the best animation to go prone and crawl underneath. Or there are 30 people walking in one of a spaceship's corridors, but each one has a completely different way of walking - some swing their arms, some walk hunched over, others walk with a proper stance - and as you walk past them, some will look at you, others will move out of the way, and some will lightly push you aside to avoid ramming into you because they are in a hurry to get somewhere.
    5. Deep learning of movie/story/historical plots so that unique scenarios and missions can be procedurally generated throughout the universe. For example: one player ends up playing the role of Han Solo trying to save a damsel in distress, while another is like Captain Picard touching a beacon and being taken to another dimension, where he lives a whole life in a split second to learn about a civilization's culture before it was destroyed.

    I believe this would greatly add to the immersion, improve productivity, and add more diversity to content creation. If this were implemented it would be so amazing!

    In other words we would have:
    1. Lore Generator
    2. Ship/ Content Generator
    3. Animation Generator
    4. AI Generator
    5. Mission Generator

    All Automated by a Deep Learning Computer!

    Hi @SolaraSolarwind,
    As far as I understand neural nets, you're better off with something that has very clearly defined output parameters. The kind of open-ended creativity you're talking about is, I'd imagine, decades away. I'd love to be proved wrong, of course, but I think for us to pick up something like that, we'd need some PhD students to scout the way and publish something on the subject.
    [hide]

    With all the amazing technology, such as procedural generation and possible new lighting techniques, will we be able to experience this level of graphics in Star Citizen, especially planetside?

    Will we have clouds like this in Star Citizen as shown in this video?
    0.jpg


    Will Cities look like this?
    0.jpg

    Clouds: Probably not at that fidelity. I'm actively working on a cloudy thing again, and one of the big issues is the resolution we can hit. Hopefully we'll release a tease video of it or something once it's looking a bit less rough.

    Cities: Speaking only from a rendering perspective, we'd need a really good antialiasing solution for that, would definitely need a GI solution for that much matte white to look appealing, and hopefully the environment team would add some big objects to break up the sight lines so you can't see too much to render at once. Besides that, it's an art question, really.

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]

    [hide]

    Wow, I've been neglecting this thread.
    *theatrical knuckle crack*

    [hide]

    This isn't specifically about Star Citizen, but I could never find a satisfying answer, so hearing your take on it would be great:

    High FOV can be an advantage. In cases where functionality has priority over looks, it might be desirable to increase it a lot, but it then becomes impractical because every 3D game I have ever seen, when increasing FOV, leaves the edges intact and distorts towards the center, the most important area of the picture. How come I've never seen a FOV implementation where the center remains intact and the picture distorts towards the edges, which is the totally common real-world effect of wide-angle camera lenses?
    Would this be a significant implementation effort in a graphics engine?
    Is it not implemented solely because the regular FOV distortion is a necessary part of the engine anyway? Is nobody, not even devs of FPS, interested in implementing an actually practical high FOV mode that brings environment awareness closer to real-life's ~170° FOV?

    This is a hard one to answer, because it's not doing what you think it is, but I think I see why you think it's doing what it's not.
    First off, why do the edges end up bigger than the middle? The reason is that pretty much all games apply a rectangular projection to the scene, which maps everything to a plane (the alternative would be some kind of fish-eye effect, I guess). Here's a sketch of a frustum; even with this fairly narrow angle you can see that the object to the left is coming out a little larger than the one in the middle.

    frustum1.png

    If you widen the FOV considerably it gets worse, but as you can see, it's the edges that are worse, not the centre.

    frustum2.png

    The interesting thing, though, is that it's not actually worse if you're sat in the right position. If you've set a FOV of 45 degrees, and scooted forward until the monitor fills 45 degrees of your vertical field of vision, your head's on the convergence point and the objects will be stretching exactly as much as perspective is foreshortening the monitor.
    So why does it seem worse? I think we've got a natural tendency not to let things out into our peripheral vision, so when you give someone twice the screen width, they just sit back a little so they can take it all in. Resist doing that and you're theoretically fine, though each extra degree of FOV costs disproportionately more screen area (the image-plane half-width grows as tan(FOV/2), so it blows up as you approach 180 degrees). Some people are making curved screens, but I'm not clear how you do the maths to drive one right.
    I've tried to ask Brian Chambers this a couple times, but haven't had any luck getting it addressed so far. However, it ties in neatly to your post so third time's the charm, right? :)

    -----

    I'd like to know if you guys are going to implement Nvidia's new multi-projection capabilities, for surround and for VR? I'm personally not that interested in VR, but I've been gaming in surround for a long long time now and this has been an issue the entire time. I'm thrilled to finally see it being addressed and I'm really hoping it'll become widely supported.

    Simultaneous Multi-Projection Pipeline

    aqiXyBB.png

    Nvidia said that part of the magic behind Pascal's rendering performance is a new technology called the Simultaneous Multi-Projection Pipeline. Nvidia explained that traditional rendering techniques use a single viewport to output to displays. This works just fine with a single display, but we've seen a rapid change in display technology, from multi-screen setups to ultrawide displays and now VR HMDs with dual displays that require warping and unique rendering techniques.

    The traditional rendering methods don't play well with multiple displays. You generally see warping on peripheral monitors in surround setups, and warping an image for a VR display wastes performance by rendering parts of the image that are never seen. Simultaneous multi-projection allows the company to dedicate a properly proportioned scene to each display in a surround system. Huang said this was previously possible only if you had a GPU for each display in your system.

    For VR rendering, Nvidia takes this idea even further. It dedicates four viewports per eye for an HMD and prewarps the image before it hits the lenses. The end result is a clearer image with more accurate proportions. Nvidia calls this Lens Matched Shading.

    Here's a second blurb on it, maybe a better description and the picture shows the correction.

    The final item discussed was new technologies to help with displays. HDR output is now present, for those of you with ultra-high-end HDTVs, and Nvidia announced their new Pascal chips will support single-pass simultaneous multi-projection with support for up to 16 independent viewports. This doesn't seem entirely new, as Maxwell 2.0 already supports something like this with their multi-res shading and multi-viewport technologies, but apparently there are some differences at the hardware level that make the new method "better."

    One example of what can be done with simultaneous multi-projection was perspective correct surround displays. Normally, if you have a triple-wide display configuration, games treat that as a flat surface. What happens if you reposition the two side displays is you get a funky break in the view. Multi-projection allows games to adapt to the position of the displays to make things look correct.


    rHuzmLonkzwKAdvsefFptM-650-80.jpg

    Hi @Krel,
    Speaking bluntly, it would probably blow up dozens of systems we have that assume they only have one screen to deal with. As an example, post-processing effects like bloom and ambient occlusion tend to need immediate access to their surrounding pixels. If some of those pixels are on a different screen, things are going to get unpleasant.
    On the other hand, I know nothing about the API for such tech, so I'm only guessing based on other nVidia Magic that I've experienced in the past. It's clearly the right way to do wrap-around vision.
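    Going back to the frustum sketches further up the post, if you want to put numbers on the stretching, here's a tiny standalone sketch (plain maths, nothing engine-specific):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // For a planar projection, the image-plane half-width at unit distance is tan(fov/2),
    // and a small object at the very edge of the screen is stretched radially by roughly
    // 1/cos^2(fov/2), because equal steps in angle map to ever larger steps in tan(angle).
    const float pi = 3.14159265f;
    for (float fovDeg = 45.0f; fovDeg <= 165.0f; fovDeg += 30.0f)
    {
        const float half        = fovDeg * 0.5f * pi / 180.0f;
        const float halfWidth   = std::tan(half);
        const float edgeStretch = 1.0f / (std::cos(half) * std::cos(half));
        std::printf("FOV %5.0f deg: half-width %6.2f, edge stretch x%6.2f\n",
                    fovDeg, halfWidth, edgeStretch);
    }
    return 0;
}
```

    By 170 degrees the half-width is over 11 units, which is why nobody ships a flat planar projection that wide - you'd be spending nearly all your pixels on the smeared edges.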

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Edited: by BParry_CIG
    Posted:
    Edited:
    [hide]

    Hey Devs,

    Got a few questions -

    1. You mentioned a Tiled Lighting system in a post on this page. Can you go into detail on how the "Tiled Lighting system" works at a high technical level? I am super interested in real-time lighting of large worlds, or solar systems for that matter.

    2. Does CIG currently use or plan on using Hierarchical Level Of Detail for Star Citizen and SQ42? From my understanding HLOD is great for really dense objects (such as a high-population space station or a large city).

    3. Are you working with any Augmented or Mixed reality devices for Star Citizen? Such as Hololens, Magic Leap, META 2 or CastAR. I can see these devices being hugely beneficial to making large fleet battles and boarding ops much more organized and having much more engaging fire fights with an "Eye in the Sky" as well as Information warfare aboard ships through hacking.

    Thank you for your time,
    HeadClot

    Hi there @HeatClot88,
    1) This time I have something prepared, but it's actually just last time's answer: https://forums.robertsspaceindustries.com/discussion/comment/6640284/#Comment_6640284
    2) My goodness, that website. I don't know if what we're doing qualifies as HLOD, but it's definitely hierarchical and a LOD. By the look of that article it's mostly talking about smartly partitioning huge meshes into small parts to build its hierarchy, whereas a lot of our ships and environments start out as hundreds of little pieces, so our system is mostly about spotting repeated patterns or near-together objects and merging them into larger meshes wherever their individual poly count is low (there's a rough sketch of that kind of merge heuristic below, after point 3). With larger ships there's still some uncertainty about the best way to get one end high-detail and the other low without adding too much extra tech investment. It may be something as basic as chopping them up a bit.
    3) I imagine you'd see VR support long before AR or MR support. I agree that AR fleet battles would be a cool thing, but you'd have to pull people off all sorts of teams to create UI and control logic for something like that, when they could otherwise be contributing to the core game.
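    As promised in point 2, here's the rough shape of a proximity-and-poly-count merge heuristic - a conceptual sketch only, not our actual tooling:

```cpp
#include <vector>

// Conceptual sketch: batch together neighbouring placed objects whose individual triangle
// counts are low, so hundreds of tiny pieces can be replaced by a handful of merged meshes
// at distance. Anything big keeps its own LoD chain.
struct PlacedObject { float x, y, z; int triCount; };

std::vector<std::vector<int>> buildMergeGroups(const std::vector<PlacedObject>& objects,
                                               float mergeDistance, int maxTrisToMerge)
{
    std::vector<std::vector<int>> groups;
    std::vector<bool> used(objects.size(), false);

    for (size_t i = 0; i < objects.size(); ++i)
    {
        if (used[i] || objects[i].triCount > maxTrisToMerge)
            continue;
        groups.push_back({ static_cast<int>(i) });
        used[i] = true;
        for (size_t j = i + 1; j < objects.size(); ++j)
        {
            if (used[j] || objects[j].triCount > maxTrisToMerge)
                continue;
            const float dx = objects[i].x - objects[j].x;
            const float dy = objects[i].y - objects[j].y;
            const float dz = objects[i].z - objects[j].z;
            if (dx * dx + dy * dy + dz * dz <= mergeDistance * mergeDistance)
            {
                groups.back().push_back(static_cast<int>(j));
                used[j] = true;
            }
        }
    }
    return groups;
}
```

    The real decisions (which materials can share a merged mesh, how repeated patterns are spotted, where the hierarchy levels sit) are where the actual work is; the grouping itself is the easy bit.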
    [hide]

    [hide]

    [hide]

    So what is the biggest challenge on the networking side of the game to bring the server FPS up in the Crusader / mini-PU area? From my understanding, the low FPS on Crusader is due to a handicap on the server side and not the client side.

    [hide]

    Is this the right thread for networking questions? Seems the closest fit but I'm not sure.

    Why is SC's FPS locked to the server's performance, and is there work underway to decouple it? In all the other MP games I've played, the client's rendering isn't slowed down by the server; if the connection is bad or even broken I still get 60 fps but enemies stop moving (or similar) - networking information isn't being updated as it should, but the game is still rendering frames with the information it has. The game isn't very enjoyable at 15 fps, which is what most servers end up at before crashing. I've had at best ~40 on a completely fresh server, and I usually get 20-30 fps with my GPU mostly idle and one core of my CPU pegged.

    Hi there @Lock_Os and @ChrisK3,
    Note: Not a network engineer. But since no one else seems to be answering...
    As I understand it, the server has some unknown performance issues - it's expected to be slow, but is instead even slower than that. However, the low framerate you're seeing on the client apparently isn't the client matching the (very slow) server, but more likely the client receiving too much data about things it really doesn't need to hear about, and taking a long time to handle it. I'm not sure whether people expect that handling to be made faster, but I do believe some of the same tech that's meant to smartly stream areas is also intended to smartly not tell clients that a bin fell over a thousand kilometres away.
    Take this all with a grain of salt, though; I'm probably the least appropriate flavour of programmer to talk of such things.
    Thanks for the reply despite it not really being your field of expertise, though obvious follow up question: why isn't there an ask the devs thread for people with that expertise to answer questions? A bit weird that an MMO doesn't really field or answer multiplayer oriented questions.
    There is such a thread, but unfortunately it's this one. The community team are kind of in a bind here - they don't want to be taking people off code work during the day, and they can't command people to do anything during the night, so it's just down to people who like to spend their free time talking about work. Programmers are often shy. I can't even use shaming tactics to drag one in here for you either; we're up on a floor that's all art and design teams.
    [hide]

    Hi Ben. Is multi-GPU support planned for DX12?

    [hide]

    I can answer this: that's part of DX12, so if CIG implements DX12 (which is a pretty safe guess) it will automatically come with it.

    Unfortunately, that is not the case with D3D12/Vulkan. The new level of explicitness means that, unlike previous APIs where multi-GPU just somehow happens with very little ability for us to control it, we're now just told that multiple resources exist and will have to write code that knows what to do with them.
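    To give a flavour of what that explicitness looks like, here's a minimal sketch against D3D12's linked-node multi-GPU model - just the public API, not code from our renderer:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// With a linked-node adapter, one ID3D12Device reports several GPU "nodes", and everything
// we create has to state which node(s) it belongs to via a NodeMask. Nothing is mirrored
// for us automatically any more.
void CreatePerNodeQueues(ID3D12Device* device,
                         std::vector<ComPtr<ID3D12CommandQueue>>& outQueues)
{
    const UINT nodeCount = device->GetNodeCount();
    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;  // this queue lives on one specific GPU
        ComPtr<ID3D12CommandQueue> queue;
        if (SUCCEEDED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue))))
            outQueues.push_back(queue);
    }
}
```

    Resources get the same treatment - heap properties carry creation and visibility node masks - so even "which GPU owns this texture, and who can see it" becomes our problem to manage.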

    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]


    Like, why is the A from Casaba offset? Or why is the casaba reflection rounded and curved and not straight? Or why does the SSR cut off even though there is more than enough screen space info for it to continue right up to the helmet edge?
    And yeah, contact hardening, vertical stretching and all that, but we have discussed that before :D

    Kind of a guess on the fadeout: either it's a steepness-of-reflection fadeout, or it's because the ceiling's angle is steep with respect to the camera, and therefore the depth buffer values it's reflecting aren't accurate enough (each ray has to guess whether it hit on or behind an object, but it really is kind of a guess).
    The A being offset is weird; it could be a dodgy surface normal on that one floor panel. The overall curvature is probably a side effect of diagonal rays fired against a buffer of forward-aligned depths: the rays move forwards in discrete steps, and a diagonal one might make it one step more or less before it intersects.
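    For anyone curious what that per-ray guessing game looks like, here's a generic screen-space ray march sketch - written as C++ for readability even though the real thing is shader code, with sampleLinearDepth() as a made-up stand-in for the depth buffer read:

```cpp
// Made-up stand-in for reading the linearised scene depth at screen position (u, v);
// in the real thing this is a texture fetch in a pixel or compute shader.
static float sampleLinearDepth(float /*u*/, float /*v*/)
{
    return 10.0f; // pretend everything on screen is a flat surface 10 units away
}

// Generic screen-space reflection march. At each fixed step we only know the depth stored
// in the buffer, so "did we hit?" is a guess: accept a hit if the ray has gone behind the
// stored surface by no more than an assumed thickness. The fixed step size is also why a
// diagonal ray can land a step early or late, bending reflections that should be straight.
static bool marchReflection(float u, float v, float depth,
                            float du, float dv, float dDepth, // per-step increments along the ray
                            int maxSteps, float surfaceThickness,
                            float& hitU, float& hitV)
{
    for (int i = 0; i < maxSteps; ++i)
    {
        u += du; v += dv; depth += dDepth;
        const float behind = depth - sampleLinearDepth(u, v); // > 0: ray is past the stored surface
        if (behind > 0.0f && behind < surfaceThickness)
        {
            hitU = u; hitV = v;                               // close enough, call it a hit
            return true;
        }
        if (behind >= surfaceThickness)
            return false;                                     // overshot: fade out or fall back to a cubemap
    }
    return false;
}
```

    Tighter steps and a smarter thickness estimate help, but the guesswork never fully goes away.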
    [hide]

    [hide]

    Unhappiness about particle lighting came to a head recently, so Ali went totally off-piste (as is a lead's prerogative) to improve things. I think you're going to be pleased with the outcome, though that's 2.5 at the soonest. He even put a fan with a light behind it, because apparently that's what all the cool kids are doing these days.

    This should be awesome looking and a huge benefit / win for the visual quality of a lot of effects I imagine (or just for scene coherency). How exactly does the new system for lighting and shadowing work, if you do not mind explaining? Are billboards still shaded/lit/etc per-vertex? How pixelated / aliased are the shadow maps cast onto them? In previous patches (2.4 and below) you could technically increase the fidelity of sun shadow maps cast onto particles (and probably particle lighting itself) by increasing the screen-space tessellation factor for particles. For example, here is the difference when adjusting the cvar "r_ParticlesTessellationTriSize":
    Default value:
    lower_tessellation7krfn.png
    Tweaked (increased) value:
    higher_tessellation5pobm.png
    You can also see the difference with a small video I made (a lot less aliased flickering with the higher tessellation value)
    Wouldn't want to say too much as it's a new thing, but it's basically the old lighting system, with access to a lot more information (e.g. shadows) about a lot more lights. It's a lot more expensive if over-tessellated, but there are some changes to make it less dependent on high tessellation to look good.
    [hide]

    [hide]

    We still need to look at AA and motion blur. We spent a bit of time squinting at the hologram tech, decided we can't quite work out how they're doing what they're doing, and we'll have a proper look later.

    Yeah it is definitely a very interesting way of doing transparencies of that type. Quite convincing.
    [hide]

    See you again soon, no doubt ;)

    And here I am!

    Speaking of monster posts... the monthly report just came out and mentioned these things!

    While looking at the lighting we’ve also started to improve the quality of rectangular area lights. Real-time renderers need to make many compromises to achieve any form of area lighting in real time, but we’re hoping to improve the results of these because sci-fi environments so often use rectangular lights.

    Awesome! Any more details on what some of these quality improvements entail / which phenomena they cover? One thing I believe is at least partially true for area lights in SC (current released builds at least) is that they cannot cast shadows - is that a fair assumption?
    Shadows on planar lights were seriously wrong, and had a layer of hasty fix by me that had totally broken them. They're now correct (to the extent that a single shadow map can be correct) for perfectly square lights. Making them right for non-square lights either involves unpleasant triangle-bending maths or changes to how the angle falloff controls work for every planar light in the game, so based on the art team's goals we're likely to leave them be.
    Besides that, it's not so much a case of incremental improvement as it is trying to work out which, if any, of the calculations currently in the engine have a basis in reality, then trying to find a replacement that doesn't break the framerate bank. We've shelved that work due to more pressing tasks, and also so that I can have a long (possibly alcohol-assisted) think.
    [hide]

    Completely tangential, but I noticed an interesting bug that occurs in the current build if you pump up the resolution to around 3840x2160. For some reason all forward-rendered materials (like glass, and even some of the environmental ship art using the cloth shader, I presume) have these lines render on them:
    starcitizen_2016_05_2xksit.png

    Interesting, not something I'd noticed. It's clearly a tiled lighting bug; my money's on some lights not being assigned to tiles because the culling calculation's wrong somehow. Conveniently, we deprecated it already, and the rasterization approach shouldn't have those problems. If it does, 2.5 is going to be mighty ugly, though. Everything will have it.
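    For context, the tile-assignment step in a tiled lighting pass looks conceptually like this - a generic illustration rather than our code (the real thing also culls against each tile's depth range):

```cpp
#include <vector>
#include <algorithm>

// Project each light's bounding sphere to a screen-space rectangle and append its index to
// every tile that rectangle overlaps. If this test under-estimates a light, whole tiles
// simply never hear about it, which shows up as hard tile-shaped seams in the lighting.
struct ScreenLight { float cx, cy, radiusPx; }; // screen-space centre and radius in pixels

void assignLightsToTiles(const std::vector<ScreenLight>& lights,
                         int screenW, int screenH, int tileSize,
                         std::vector<std::vector<int>>& tileLightLists)
{
    const int tilesX = (screenW + tileSize - 1) / tileSize;
    const int tilesY = (screenH + tileSize - 1) / tileSize;
    tileLightLists.assign(static_cast<size_t>(tilesX) * tilesY, {});

    for (int i = 0; i < static_cast<int>(lights.size()); ++i)
    {
        const ScreenLight& l = lights[i];
        const int x0 = std::max(0, static_cast<int>((l.cx - l.radiusPx) / tileSize));
        const int x1 = std::min(tilesX - 1, static_cast<int>((l.cx + l.radiusPx) / tileSize));
        const int y0 = std::max(0, static_cast<int>((l.cy - l.radiusPx) / tileSize));
        const int y1 = std::min(tilesY - 1, static_cast<int>((l.cy + l.radiusPx) / tileSize));
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                tileLightLists[static_cast<size_t>(ty) * tilesX + tx].push_back(i);
    }
}
```

    If the maths only goes wrong at certain resolutions (tile counts or precision shifting at 4K, say), you'd see artifacts at one resolution and not another - but again, that's a guess.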
    [hide]

    Which begs the question, given how the game uses a set of material guidelines for building assets... is there some reason why the cloth shader is used in ships for surfaces (you mentioned seeing it in the Idris as well) but not on cloth items for characters? IMO, the materials that use it in the Starfarer look quite a bit more plausible because of it. The cloth material in the elevator shafts right here looks particularly convincing due to the rim lighting and the softness:
    starcitizen_2016_04_0w7orl.png

    Cloth is nice, but it's a forward-shaded standalone shader. Unfortunately, that means its output can't be replicated by the layer-blend shader which is widely used on characters. It's also more expensive, being forward-shaded, and different shaders necessitate more draw calls, so it's harder to be efficient with them.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]

    I've noticed that the insides of helmets (in first person, of course) are being affected by outside lights, which looks weird and could be distracting in tense moments, since a reflection in your peripheral vision could be misconstrued as movement. Any plans or ideas on how that could be tackled?

    I'd be disinclined to solve that technologically. If there's something twinkly in the helmet that keeps picking up distracting illumination, it might be better to just paint it black.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]

    [hide]

    [hide]

    Hi Ben. Is multi-GPU support planned for DX12?

    [hide]

    I can answer this: that's part of DX12, so if CIG implements DX12 (which is a pretty safe guess) it will automatically come with it.

    Unfortunately, that is not the case with D3D12/Vulkan. The new level of explicitness means that, unlike previous APIs where multi-GPU just somehow happens with very little ability for us to control it, we're now just told that multiple resources exist and will have to write code that knows what to do with them.

    That's exactly why I am asking. AMD is strongly pushing the multi-GPU path for their future DX12 cards. Can we hope for support for this?
    We've already invested quite a lot of work in making things interact with D3D11 multi-GPU, since while that "just works" it actually doesn't for anything that persists between frames. We'd be throwing away that investment if we just ditched multi-GPU in the future, so I expect we'll do what's necessary.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:
    Posted:
    [hide]

    I was wondering if you could say something about the priority of implementing the features/elements below, if you've decided yet and can share any information.

    Also, it'd be great to know if some of these (and which ones) are not planned for implementation before the initial commercial release, or if any of them are not planned to do at all.

    Again, only if you know and can share the information. Thanks!

    - DirectX 12
    - Vulkan (in addition to, or instead of, the proprietary and closed-source DirectX 12, since nVidia now also supports Vulkan?)
    - AMD's GPUOpen (instead of proprietary nVidia Gameworks, which actually harms performance on AMD cards)
    - VR tech (perhaps using AMD's LiquidVR?)
    - SLI & Crossfire support
    - GPU utilization of math intensive operations
    - CPU utilization of multiple cores & Multithreading (plus: how many cores will Star Citizen be able to utilize?)

    Best regards,
    Viking

    Hello again, @Viking!
    Discussion of D3D12 and Vulkan tends to be phrased as "next-gen APIs"; the current work for them is API-agnostic, and I'm not certain which of the two is favoured by the Deep Tech Wizards Of Frankfurt. Nor am I certain what their timescale is. I'd guess, though, that if it came to a choice between delaying features that would stop us releasing Sq42 and delaying next-gen API support to a post-release patch, the decision wouldn't be a hard one.
    GPUOpen and Gameworks are both middleware solutions that I don't think we have any plans for, given we're using an established engine with AO, shadow, etc tech already in there. If we decided to gut one of those systems (shadows, please be shadows) we might look at the middleware options to go into the replacement, but no one wants to hear we're gutting a major engine feature right now.
    SLI and Crossfire support we try to stay on top of continuously, there are some rough edges but we do try to keep it behaving right. Next-gen API support for multi-GPU, as mentioned upthread, will be a piece of work that naturally has to be lower priority than getting those APIs working.
    GPU profiling is something we also try to stay on top of constantly. It's easy for the numbers to creep up at the moment because we're so commonly CPU-bound, but we have a big screen showing which systems cost what, and we've recently created a new debug screen that pins nearly every GPU operation to a specific team and location, so we should be better at quickly hunting down the culprit.
    Multi-core, from a rendering perspective, is one of the main things blocking the next-gen API work - too much global state means a lot of work's stuck on the render thread. That's what they're chipping away at. There's also a lot of multi-core querying, culling, and so on. It's a job-based architecture though, so without actually having looked at a graph of this stuff, I can't tell you how many cores wide it manages to get.
    What did I miss? VR? I don't know the status of VR.
    Programmer - Graphics Team