PROGRAMMING (Engine, API, Hardware, etc)

  • sailor67

    Posts: 12291

    Posted:
    @ABrown_CIG

    Just watched the latest ATV. Erin (@Addis) was showing the new ship's control system. Can you or one of your teammates explain how someone using a stick/HOTAS is supposed to interact with that? I genuinely don't know how that can work without a TrackIR or equivalent.

  • ABrown_CIG

    Staff

    Posted:
    Edited: by ABrown_CIG
    Hi everyone,

    There are quite a few questions here about HUD / MFDs, controller mapping, and networking. You'll have noticed that Ben is our most prolific poster, and I try my best to jump in where I can, but we're both part of the graphics team so we don't have the knowledge to answer those questions. I'll poke some of the other programming leads to see if they have time to answer a few, but I just wanted to let you know that we're not ignoring you!

    I would love to know why DirectX and Vulkan are still both options for multiplatform games.
    When SC is ready to be ported over to Linux, wouldn't it be easier if you already used Vulkan on Windows and didn't even bother implementing DirectX 12?
    You would not spend resources fixing or implementing DX rendering stuff.

    Hi hyper159,

    Years ago we stated our intention to support DX12, but since the introduction of Vulkan, which has the same feature set and performance advantages, it seemed a much more logical rendering API to use: it doesn't force our users to upgrade to Windows 10, and it opens the door to a single graphics API that could be used on Windows 7, 8 and 10 as well as Linux. As a result, our current intention is to support only Vulkan and eventually drop support for DX11, as this shouldn't affect any of our backers. DX12 would only be considered if we found it gave us a specific and substantial advantage over Vulkan. The APIs really aren't that different though; 95% of the work for these APIs is changing the paradigm of the rendering pipeline, which is the same for both.

    What's the technical reason for the low FPS in the near-sun QDrive warp in today's 10FTC as seen here:
    https://gfycat.com/UglyLinearDog
    Just a lot of objects on the screen at the same time causing issues?

    Hi Hawxy,

    One thing to remember is that, as Erin mentioned, this video represents our 'visual target' rather than a specific star system, and was an artist-led exploration of the type of gas clouds and color themes the art team wants to achieve to add visual interest to the star systems. As a result it wasn't made to be efficient and hasn't been profiled by any tech-artists or programmers. It also wasn't using our volumetric gas cloud system; instead it was manually put together to show us one example of what they want to achieve more easily and robustly with the gas cloud system. Our existing use-cases for gas clouds have been quite different to the example shown here, but based on this and other artist reference work we're about to start a major upgrade of our volumetric tech in order to achieve this at a hopefully silky-smooth frame-rate!

    In terms of number of objects, when using our dedicated systems for asteroids & debris we can already handle over 100,000 individually moving objects on screen at well over 60fps, and intend to use an imposter system to handle the visualization of millions more in the background, so there's no concerns there :-)

    [Question]: Why does it seem like CIG is avoiding the most important questions? I see and hear all the progress on easy stuff that could wait until after the game has been released. Example: graphics detailing and animation improvements, as well as gameplay improvements. If all of this can be done, then why don't we have a working release of the game? I see no description of progress on the programming development of the game. I don't care how small the progress is, I just want to know what was done. Telling us what is going to be done is great, but we need to know what was actually done each week. You are working on our money, so in theory we should be told how much progress was made each week on the game.
    (What has been done in the week, not what will be done in the future, like everyone talks about.)

    Hi Calgen,

    We do our best to update the community on the work we're doing, which can be seen in the monthly reports and Around The Verse (which takes a significant effort to produce). It's also worth remembering that our programming team has many specialized departments (game code, animation, audio, graphics, engine, tools, UI, AI & networking), and it's not like they can just jump into each other's code to help out. But I can assure you that all the teams are working hard and none of their work is "easy".

    Here are some links showing our recent updates:

    https://robertsspaceindustries.com/comm-link/transmission/15790-Monthly-Studio-Report
    https://robertsspaceindustries.com/comm-link/transmission/15704-Monthly-Studio-Report
    https://robertsspaceindustries.com/comm-link/transmission/15786-Around-The-Verse
    https://robertsspaceindustries.com/comm-link/transmission/15778-Around-The-Verse

    So for example, in my January report you can see the Graphics team started work on area lights; in the February report they had made a great deal of progress, and the results were then shown in this week's Around The Verse. The progress of the other programming teams can equally be seen, so I'm not sure which programming department you're looking for more information on.

    Of course the community often also wants to know what we *intend* to do in the future, as a good chunk of our previous work is already evident in our publicly released alpha, so we try to strike a balance between describing past and future work.

    Cheers,

    Ali Brown - Director of Graphics Engineering

  • VesperTV

    Posts: 8

    Posted:
    Edited: by VesperTV
    [DELETED, posted again below by mistake]
  • Paldren

    Posts: 2295

    Posted:

    Years ago we stated our intention to support DX12, but since the introduction of Vulkan, which has the same feature set and performance advantages, it seemed a much more logical rendering API to use: it doesn't force our users to upgrade to Windows 10, and it opens the door to a single graphics API that could be used on Windows 7, 8 and 10 as well as Linux. As a result, our current intention is to support only Vulkan and eventually drop support for DX11, as this shouldn't affect any of our backers. DX12 would only be considered if we found it gave us a specific and substantial advantage over Vulkan. The APIs really aren't that different though; 95% of the work for these APIs is changing the paradigm of the rendering pipeline, which is the same for both.

    I am really excited by this, considering the immoral ways Microsoft has been going with Windows 10 ... but does this mean that mGPU will only be an option under Windows 10?


    Source: https://www.khronos.org/assets/uploads/press_releases/2017-rel149-vulkan-update.pdf

    Native multi-GPU support for NVIDIA SLI and AMD Crossfire platforms
    – WDDM must be in “linked display adapter” mode

  • Notavi

    Posts: 7

    Posted:
    Edited: by Notavi

    I would love to know why DirectX and Vulkan are still both options for multiplatform games.
    When SC is ready to be ported over to Linux, wouldn't it be easier if you already used Vulkan on Windows and didn't even bother implementing DirectX 12?
    You would not spend resources fixing or implementing DX rendering stuff.

    Hi hyper159,

    Years ago we stated our intention to support DX12, but since the introduction of Vulkan, which has the same feature set and performance advantages, it seemed a much more logical rendering API to use: it doesn't force our users to upgrade to Windows 10, and it opens the door to a single graphics API that could be used on Windows 7, 8 and 10 as well as Linux. As a result, our current intention is to support only Vulkan and eventually drop support for DX11, as this shouldn't affect any of our backers. DX12 would only be considered if we found it gave us a specific and substantial advantage over Vulkan. The APIs really aren't that different though; 95% of the work for these APIs is changing the paradigm of the rendering pipeline, which is the same for both.
    Excellent. I've been avidly awaiting news about how the Vulkan work is going (proud member of the Grounded Linux Navy here). I occasionally see glimpses of news about it in the monthly reports, but since it seems to be proceeding on a when-it's-done timescale it doesn't make it into the weekly reports. That's fair enough, since from what I understand you're not yet ready to commit it to a particular release, though it would be lovely to have a section in that report discussing other background work that might be happening.

    The last news was a few months ago, and indicated that the team was mainly busy doing the engine re-organisation work needed to make the best use of Vulkan. Is that still where you're at, or is there some news to share on this front?
  • Dranor-Zylander

    Posts: 1633

    Posted:
    @ABrown_CIG, will the first public Linux build have a code name? I'd like to suggest Sprinty Leopard lol.

    Someone should let Richard Stallman know.
  • BParry_CIG

    Developer

    Posted:


    Hi Dictator,

    Are you sure you don't actually work here? This summary is pretty comprehensive, even highlighting the major assumptions & flaws in each piece of tech :-)

    Haha. I guess my years of meticulously obsessing over pixels have had some benefits! *Cough Cough* If you guys are ever looking for an external graphics QA... *Cough Cough* ;)

    Seconding the impressed-ness with the summary. I'm meant to be writing something like that up before we make changes... I might just use this.

    -snip'd-

    That sounds pretty sound. Since you guys would be then making use of the LY-screen-aligned voxels, would you guys also use that for planetary cloud rendering (non-gas-cloud-type-clouds)?
    While the exact unification plan isn't nailed down, our rough view is that any system we have needs an answer for when it's inside the Froxel Fog's range, and an answer for when it's outside. We basically end up with three categories:
    1) In some cases, the answer will be that it's small enough to disappear once it's beyond the range, this is the way standard LY deals with fog volumes for instance.
    2) In other cases where Froxel Fog would work better, or another system that uses it may be present (e.g. will fog volumes exist at cloud altitude?), we'd like the long-distance solution to be able to export its own data into the froxel buffers, so that there's no discontinuity where the two solutions switch over.
    3) Finally there may be cases where the "distant" solution looks as good or better than Froxel Fog, and in that case the systems don't need to interact at all.

    Since planetary clouds are going to be visible at very long distances, we know they can't be solved only with Froxel Fog, so they'll fall into either category 2 or 3.
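    The three-way split described above can be sketched as a tiny dispatch function. This is an illustrative reading of the post, not the engine's actual logic; the `FROXEL_RANGE` value and the `distant_matches_froxel` flag are invented for the example.

```python
# Hedged sketch of the three categories described in the post.
FROXEL_RANGE = 500.0  # hypothetical far edge of the froxel fog volume, metres

def volumetric_category(visible_range, distant_matches_froxel=False):
    """Classify a volumetric effect per the three cases:
    1 - small enough to simply fade out beyond the froxel range,
    2 - must export its data into the froxel buffers for a seamless handover,
    3 - the distant solution looks as good or better; no interaction needed."""
    if visible_range <= FROXEL_RANGE:
        return 1  # e.g. a standard LY fog volume that can just disappear
    return 3 if distant_matches_froxel else 2

# Planetary clouds are visible far beyond any froxel range,
# so they land in category 2 or 3, as the post concludes.
category = volumetric_category(visible_range=1e7)
```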

    By the way, I'm avoiding the word 'nebula' when discussing space volumetrics, as that suggests a ridiculous scale that would be pointless to represent as a volume: even at light speed you wouldn't see any parallax.

    Yeah, perhaps just "space dust" fits well enough ;)

    Looking at the most recent monthly report, I definitely loved seeing @BParry_CIG 's work on the rectangular area lights.
    (three screenshots from the Around The Verse footage, captured 2017-03-19)
    It looks a lot better on character faces than the last model, especially the way diffuse light propagates over small discrete features, like the undersides of eyelids. Is some of this work based on the Eric Heitz / Unity work that has been going around? If so, the most recent paper also showed off how to make textured rectangular area lights. That could be interesting given the amount of signage in SC atm.
    I may have posted the video of Heitz's LTC paper to Ali's Facebook page under the heading "ALI. ALI LOOK" the moment I heard about it. As it happens though, we're not using it. While it's miles ahead of the pack in performance/quality tradeoff, it's still a huge cost (approx. 1ms per fullscreen light on a GTX 980), and we would then have no way to downscale on lower-spec machines, given how different the scenes would look with those lights disabled.
    Instead, the diffuse component is closely based on Sébastien Lagarde's work on planar lights in the Frostbite engine, and the specular component is somewhat based on the version we got from CryEngine; the reworking mostly consisted of taking it apart to find out which approximations led to artefacts, replacing them with alternatives, and exhaustively testing for new issues.
    One thing that did come out of this rework, however, is that the planar lights now live in their own rendering pass. Given the time, it would be interesting to see if we could add some kind of "ultra mode" that replaces this pass with a version that uses LTC lights.


    I am curious then, what kind of work, if any, will be done for occlusion or shadows from such lights? So, for example, one cannot see the specular highlight across character eyes when the hood of their eyelid would perhaps occlude it. Obviously that is an open point of research for video game graphics, but unshadowed lights in general (even area lights) tend to look rather gamey in the end. Especially given how the human face needs good occlusion and shadows to cross the uncanny valley usually.

    I am curious as well, because some shots outdoors on planetside from the most recent AtV got me to thinking about specular occlusion in SC, as it seemed like some edges were highlighted from probes even though you would imagine that SSDO would directionally occlude that.
    (screenshot from the Around The Verse footage, captured 2017-03-19)

    Specular occlusion and a better SSDO term are definitely on our radar. Unfortunately shadow maps from large area lights, as you mention, are a huge open research topic, so we're focusing on improving softer screen space techniques to take up the slack.


    And last question, I swear. Given the prevalence of space helmets with glass in SC, how will this figure in to DOF? At the moment DOF seems to act as if the glass is not there and blends it into the background or foreground depth of field, even though it should not from a realism standpoint (like below).
    (screenshot illustrating DOF blurring through helmet glass)
    I know Ryse had some funky way of making transparency not have this problem ("depth fix up"). Is a similar idea going to be used in SC when you guys make use of LY's post processing for motion blur and depth of field?

    Anyway, thanks for answering any questions if you guys do.
    Best!

    We have two solutions to this! First, as you say, there's depth-fixup. We're already using that on hair, and (I think) on particles. In general though, the trick is to do depth fixup only on the parts that are opaque enough that they dominate the image, which is less easy to do for things like glass and holograms.
    The second solution is that we simply sort objects into two lists based on whether they're beyond a certain (situationally-varying) distance. Things beyond that distance are drawn before the DOF and motion blur calculations, things nearer than that plane are left blur-free. While this clearly isn't a perfect solution, nearby transparencies tend to be more problematic when they blur with what's behind them, and more distant transparencies are problematic if they remain crisp when objects around them are blurred. This second solution is already working at the render end, but we still need to hook it up to the systems that will control it.
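    That two-list split can be sketched as a simple partition around the split distance. The object names and distances below are invented for illustration; the real system would of course operate on render batches, not tuples.

```python
# Sketch of the second solution described: transparencies beyond a
# situationally varying split distance are drawn before DOF/motion blur
# (so they blur with the scene); nearer ones are drawn after, staying crisp.

def split_for_dof(transparents, split_distance):
    """Return (pre_dof, post_dof) draw lists from (name, distance) pairs."""
    pre_dof = [t for t in transparents if t[1] >= split_distance]   # blurred
    post_dof = [t for t in transparents if t[1] < split_distance]   # crisp
    return pre_dof, post_dof

objects = [("helmet_glass", 0.3), ("cockpit_canopy", 2.0),
           ("distant_hologram", 150.0)]
pre, post = split_for_dof(objects, split_distance=50.0)
# pre  -> the distant hologram, drawn before the DOF pass
# post -> the nearby glass surfaces, left blur-free
```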

    Keep em coming!
    Ben
    Programmer - Graphics Team
  • Senkan

    Posts: 57

    Posted:
    Hi,

    Thanks for answering so many questions in this thread. I was wondering about global illumination. The last time you talked about it, it wasn't locked as a feature but you started really needing it. Have your thoughts on it changed? From what you said the current implementation samples very sparsely placed points, I believe on planetary scale. What implementations and approaches are you leaning toward, if applicable? What benefits are you looking for?

    I'm just trying to pick your brains on this topic which is pretty interesting to me :).

    The best to you all, thanks for your great work.
  • MrBobarian

    Posts: 1

    Posted:
    Hi! We saw a sneak peek of your ID masking and general masking system in a recent ATV. I would love to hear a bit more about how that works. I assume you make masks outside the engine and import them. Also, to my knowledge you could either use an ID texture mask to mask out different materials, or save the material sets from your 3D package; how do you go about this?
    Thanks in advance ;)
  • elec

    Posts: 17

    Posted:
    3 short questions:
    - Will the game have explicit multi-GPU support later on? (It should, when we think about big simulator screens and such in the future.)

    - Will we be able to change the simulation speed of single-player parts like SQ42 or Arena Commander on the fly? (To make epic-looking videos together with the advanced camera options and TrackIR.)

    - Will the game support NVIDIA Ansel? (A super-resolution screen/render-shot tool; here are some examples from GR: Wildlands.)

    thank you :)
  • Nekomimimode

    Posts: 18

    Posted:
    Hello,
    May we have an update on the patcher upgrade progress and schedule, please?
    Thanks
    Nekomimimode
    Chief of Business Ventures (COO) Nekomimimode
    The Seraphim Regiment, Star Citizen Division
    Trade/Transport, Acquisition, Industry, Logistics, Freelancers, Racing, Sports
  • Oberscht

    Posts: 112

    Posted:
    Sean Tracy told me to e-mail my local graphics programmer about this. There was a video with him and Eric Kieron Davis where the question was whether a lit moon would light up the dark side of a planet, as it does on Earth in real life. He assumed the question was about proper raytracing, which of course wouldn't be viable for a real-time application. But wouldn't it be possible to measure how much the side of a moon facing you gets lit up, and then use it as an appropriately intense light source?
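    The "measure the lit side and use it as a light" idea can be approximated analytically rather than by raytracing: treat the moon as a secondary light whose strength scales with the sun's irradiance, the moon's albedo, the lit fraction of its disc, and the solid angle it subtends. The formula below is a rough back-of-envelope sketch, not anything from the engine, and it ignores phase-curve effects and atmosphere.

```python
import math

def moonlight_irradiance(sun_irradiance, albedo, lit_fraction,
                         moon_radius, distance):
    """Crude estimate of sunlight bounced off a moon onto a viewer:
    sun irradiance, scaled by albedo, by how much of the visible disc
    is lit, and by the moon's solid angle relative to a hemisphere."""
    solid_angle = math.pi * (moon_radius / distance) ** 2
    return sun_irradiance * albedo * lit_fraction * solid_angle / (2 * math.pi)

# Earth's moon at full phase: ~1361 W/m^2 solar irradiance, albedo ~0.12,
# radius ~1.74e6 m, distance ~3.84e8 m. Result is on the order of
# milliwatts per square metre, i.e. a very dim but nonzero light source.
e = moonlight_irradiance(1361.0, 0.12, 1.0, 1.74e6, 3.84e8)
```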
    "Ignore the haters" - Philistine motto
  • BParry_CIG

    Developer

    Posted:

    did Vulcan ever fix the squaring of round objects? I know AMD's api runs both sets of hardware and supports virutual objects in memory better as the xbox uses a tolken ring bus for passing objects to memory, but I know that there were issues even on microsoft's direct x team using vulcan on AMD hardware that there were extra calls to objects that they stored in memory simply to add extra draw calls. The article might still be up on tech net but that is microsoft and it might be a direct x api issue not a vulcan api issue. I know open gl was a more open format that failed due to mainlining 3DFX ARB path verse Nvidia's ARRB path and AMD path was not even in use yet. Having worked on the hero engine (unreleased game they did not have the money) , unreal editor and unity (disney planes) I know most of them use multiple render paths to support code and I have to wonder if the vulcan engine is the reason the hanger is so blown out when you look at the shiny parts?

    Hi @VLOR, you've asked several questions here and I don't understand all of them, so I'll pick over this and maybe you can clarify.

    did Vulcan ever fix the squaring of round objects?

    I'm not aware of a problem with round objects being squared, if you've got a link on the subject I'd be interested to look.

    I know AMD's api runs both sets of hardware and supports virutual objects in memory better as the xbox uses a tolken ring bus for passing objects to memory, but I know that there were issues even on microsoft's direct x team using vulcan on AMD hardware that there were extra calls to objects that they stored in memory simply to add extra draw calls.

    I should mention that it's not technically AMD's API anymore, when it became Vulkan it started being managed by the Khronos group. Your question seems to be about someone adding extra draw calls, either by accident or possibly to look better on a benchmark? I've not read about this, but since each manufacturer implements the drivers for their own hardware, they have an incentive to do the best possible job.

    I know most of them use multiple render paths to support code I have to wonder if the vulcan engine is the reason the hanger is so blown out when you look at the shiny parts?

    I can say for certain that this is not the case, since the engine is 100% D3D 11 right now. In general, though, we wouldn't expect any API to have recognisable visual differences at that level. If we support multiple codepaths in the future, our shaders will use some kind of cross compiling system so as to have one source file and guarantee that something like that doesn't accidentally get left out of sync. The shaders executed might be in a subtly different language, but it doesn't make sense for the maths they're doing to be any different.
    Programmer - Graphics Team
  • BParry_CIG

    Developer

    Posted:

    I believe he is talking about the squaring of shadows cast from objects that have "faked" round edges from tessellation rounding. The object would appear round, but the process that cast the shadow would still cast one from the invisible parts of the geometry. Thus, square shadows.

    I could be completely wrong about that, and probably am, as I am a ME major, not EE, but I remember one of my buddies talking about a similar problem he had to solve for a class project.

    Thank you, @Alix_Stone!
    If this is the problem in question, I can safely say that Vulkan isn't really a variable when it comes to solving it. Questions such as what to tessellate and how much to tessellate it, or even what detail level to render a mesh at in different views, are definitely higher-level concepts that we have to solve well before we talk to the rendering API.
    Programmer - Graphics Team
  • IggyPi

    Posts: 209

    Posted:
    Edited: by IggyPi
    I'm curious: what database system runs the persistence database?
  • Ender-jr

    Posts: 1096

    Posted:
    Hi Ben or Ali:
    Some of CIG's first stretch goals related to Sim enthusiasts.

    ACCOMPLISHED $4,500,000

    Star Citizen will launch with 60 star systems.
    Star Citizen will feature an additional playable ship class, the cruiser.
    All Kickstarter goals unlocked
    All backers before October 29, 2012 will start Star Citizen with a Class I Repair Bot in their garage.
    All backers before November 8, 2012 will start Star Citizen with 500 additional credits.
    Extended hardcore flight sim controller support: Flight Chairs, multiple monitors, Track-IR, MFD (Multi Function Displays) and more on launch.
    Star Citizen will feature four additional playable ship classes: Idris class corvette, Origin M50, Drake Interplanetary Caterpillar and destroyers.
    Star Citizen will feature two additional base types: Vanduul trading posts and hidden smuggler asteroids.
    Star Citizen will feature an additional alien race, the Kr’Thak.

    ACCOMPLISHED $5,000,000

    Enhanced boarding options: melee combat, heavy weapons, zero gravity simulation, suit HUD options and EVA combat.
    Increased ship customization.
    Tablet companion application to check on your inventory, commission or find missions and get the galactic news feed.
    The RSI webcast will feature a monthly Town Hall Q&A with Chris Roberts
    Squadron 42 will feature celebrity voice-acting including at least one favorite from Wing Commander and 50 total missions.
    Star Citizen will launch with 70 star systems.
    Star Citizen will feature an additional base type. Can you discover the alien derelict?


    There is a discussion in the Simpits Forum that could use some of your insight as to the progress on implementing these goals. If nothing else, it should show you some backer thoughts on the subject.
    https://forums.robertsspaceindustries.com/discussion/375846/support-for-multi-displays-and-mobile-devices#latest


    Always drink upstream from the herd.
  • cjohnson

    Developer

    Posted:

    I don't really post on forums much; I think this is like my 3rd post in 3 years for this game. I watched the latest Around the Verse dealing with multi-region servers. I'm a lower-level network engineer for my company, so by no means do I claim to know everything. I have one simple question that I don't understand after looking at your model.

    Why are you guys using TCP over UDP?

    Hi ShadowSpear

    We're actually using both. For the patcher and backend services, reliability is paramount and latency isn't really a concern (data must arrive but it doesn't matter if it takes a bit longer to get there) so both of these use TCP for network transport. On the other hand, we want to keep latency between client and game server as low as possible. Reliability is therefore less important since if data takes too long to arrive (due to packet loss and subsequent resending) it may no longer be useful. For these reasons client/server traffic is sent over UDP.
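    The split described can be illustrated with a minimal loopback example showing the two socket types in play; this is just an illustration of TCP versus UDP, not CIG's actual protocol or packet formats.

```python
# UDP: fire-and-forget datagram, no handshake, no automatic resend --
# suits latency-sensitive game state that goes stale quickly.
import socket

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))                    # OS picks a free port
udp.sendto(b"player position update", udp.getsockname())
state, _ = udp.recvfrom(1024)
udp.close()

# TCP: connected stream with built-in reliability and congestion
# control -- suits patch data and backend services where every byte
# must arrive, even if late.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server, _ = listener.accept()
client.sendall(b"patch chunk 0001")
patch = server.recv(1024)
for s in (client, server, listener):
    s.close()
```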

    Are there plans to add a congestion control layer to the launcher, apart from manually setting download speeds? (choices which really don't make much sense - there are very few choices!)

    Since the download is done over UDP, it easily congests a 1 Gbit network because there's no congestion control done.

    Protocols like the Micro Transport Protocol (µTP) could be used much like with torrent clients, which utilize the available bandwidth very well without creating packet loss for the user.

    The algorithm is introduced here: https://tools.ietf.org/html/rfc6817

    Hi Klexmoo

    I don't work on the launcher/patcher and don't know what the plans are for it but my understanding is that currently it uses TCP. TCP of course has congestion control and avoidance built in but these algorithms also try to play fair with other connections and share available bandwidth equally. As democratic as that sounds it isn't really what you want if you have a lot of data to download and want to do it as quickly as possible. The reason the patcher can saturate your network connection is that it opens multiple TCP connections at the same time and uses them to download different parts of the patch. Each connection gets its own fair share of bandwidth but because the patcher opens many of them it gets to take many slices of the bandwidth pie! All modern web browsers use the same trick to download pages from content heavy websites more quickly than could be done over a single connection. Of course browsers don't usually download quite as much data as the patcher so you never notice the impact other than faster browsing.
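    The "many slices of the bandwidth pie" trick can be simulated without a network: split a file into byte ranges, fetch each range on its own worker (standing in for one TCP connection making an HTTP range request), then reassemble in offset order. All names and the chunking scheme here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

FILE = bytes(range(256)) * 64  # stand-in for a patch file on a server

def fetch_range(start, end):
    """Stand-in for one ranged download over its own TCP connection."""
    return start, FILE[start:end]

def parallel_download(size, connections):
    chunk = -(-size // connections)  # ceiling division
    ranges = [(i, min(i + chunk, size)) for i in range(0, size, chunk)]
    with ThreadPoolExecutor(max_workers=connections) as pool:
        parts = list(pool.map(lambda r: fetch_range(*r), ranges))
    # Reassemble by offset, regardless of which worker finished first.
    return b"".join(data for _, data in sorted(parts))

result = parallel_download(len(FILE), connections=8)
```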

    Hey Ben! First time posting in the developers' section. It's more on the networking side.
    I've been doing some reading about network optimization and this journal came up (below). In terms of network coding to support a player count of say (an exaggerated amount) 500,000 players, is it that currently, there exists a huge trade off (correlation) between high continuous data rates from a server to client and the noise tolerance reduction that may occur? I do understand the netcoding side of it and how it affects the ping based on distance, but in terms of player count support and stability, does this article have any relevance? I'm curious because I was thinking that while unifying even dedicated servers may help, we'd still be scratching our heads to bending the current underlying technology to meet a very very high player count. It'll be amazing and enjoyable still if we can eventually player with 100 players or a bit more but just thought I'd include this bit also. I REALLY hope we get to see an AtV segment of the networking steps that would have been made for 3.0 when that time comes!

    Networking sure is difficult! But it sounds really intriguing!

    https://www.researchgate.net/profile/Antonio_Napoli/publication/271470858_FEC_Overhead_and_Fiber_Nonlinearity_Mitigation_Performance_and_Power_Consumption_Tradeoffs/links/5632bb8b08ae911fcd491718.pdf

    Hi Kalrati

    The paper you linked deals with networking at a much lower level than we have to deal with. Networking on the internet is often talked about in 5 layers:

    Application
    Transport
    Network
    Link
    Physical

    The physical layer at the bottom details the actual real-world stuff that carries data over part of a network (wires and electricity, radio waves or fiber optic cables and laser light) and how to encode binary ones and zeros as physical quantities, e.g. voltage levels, different radio frequencies, etc. The link layer is a protocol for getting a packet of data from one node in the network to another. Link layer protocols are designed for particular physical layers (e.g. Ethernet protocol for Ethernet cables and 802.11 for wifi) and use techniques like Forward Error Correction (FEC) to detect and, if possible, correct errors introduced by interference affecting the physical layer. This is where the paper you linked fits in.

    We work right at the top in the application layer and have our own custom network protocol that sits on top of the protocols in the other layers. HTTP, FTP, SMTP, POP3, IMAP and even telnet are all application layer protocols that most people will have heard of. Below that is the transport layer (TCP, UDP) which deals with ports (so computers can have more than one connection at a time), packet sequencing, reliability and congestion control. Because we use UDP for client/server we have to deal with those last three in our application protocol. Right in the middle we have the network layer which deals with identifying network devices by a unique address and directing packets so they get to where they need to go. IP is the main protocol here but it only works because it is supported by others including ICMP, DHCP and BGP.
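    Handling "packet sequencing, reliability and congestion control" in the application layer usually starts with stamping each UDP datagram with a sequence number, so the receiver can detect gaps, reorder arrivals, and drop duplicates. A toy receiver-side sketch (not the actual netcode; a real protocol would also send acks and resend or discard stale data):

```python
class SequencedReceiver:
    """Toy receiver that reorders and de-duplicates numbered datagrams."""
    def __init__(self):
        self.next_seq = 0
        self.buffered = {}    # out-of-order packets waiting their turn
        self.delivered = []   # payloads handed to the game, in order

    def on_packet(self, seq, payload):
        if seq < self.next_seq or seq in self.buffered:
            return  # duplicate packet; ignore it
        self.buffered[seq] = payload
        # Deliver any contiguous run now available.
        while self.next_seq in self.buffered:
            self.delivered.append(self.buffered.pop(self.next_seq))
            self.next_seq += 1

rx = SequencedReceiver()
# Packets arrive reordered, with one duplicate, as UDP allows.
for seq, data in [(0, b"a"), (2, b"c"), (1, b"b"), (1, b"b")]:
    rx.on_packet(seq, data)
```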

    Most of the packet loss we have to worry about occurs in the network layer rather than all the way down in the physical/link layers. What will happen is that as a packet crosses a gateway from one network to another (the inter part of inter-networking) the gateway may find that there currently isn't enough bandwidth on the other side to allow it to send the packet on its merry way. The gateway has to decide whether to put the packet in a queue and try to send it later (the cause of most latency on the internet) or to drop the packet entirely (packet loss). So you are correct that there is a relationship between bandwidth and loss. If we see latency or packet loss go up then quite often we can do something about it by sending less data until we reach a level the gateway can handle, at which point everything should return to normal. This adjustment should be done continuously by the servers and is one of the areas in the netcode where we need to make improvements. Of course our packets won't be the only ones arriving at a gateway on the public internet so sometimes one will get backed up, say because of a new viral video, and there won't be a lot we can do about it other than to avoid making the situation worse.
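    The "send less data until we reach a level the gateway can handle" adjustment described is, in spirit, the additive-increase/multiplicative-decrease scheme TCP itself uses. A toy version for a UDP sender might look like this; all constants are invented for illustration.

```python
def adjust_send_rate(rate, loss_detected,
                     increase=5_000.0,   # bytes/sec added per healthy interval
                     decrease=0.5,       # cut factor when loss appears
                     floor=10_000.0, ceiling=1_000_000.0):
    """One step of additive-increase/multiplicative-decrease rate control."""
    if loss_detected:
        rate *= decrease   # back off hard when the gateway starts dropping
    else:
        rate += increase   # probe gently for spare bandwidth
    return max(floor, min(ceiling, rate))

rate = 100_000.0
rate = adjust_send_rate(rate, loss_detected=False)  # -> 105000.0
rate = adjust_send_rate(rate, loss_detected=True)   # -> 52500.0
```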

    I doubt bandwidth will limit how far we can scale the multi-server architecture, partly because the machines our servers run on are designed specifically for serving lots of data and have enormous amounts of bandwidth, but also because the beauty of running on cloud-hosted servers is that we can always spin up another one and spread the load. I think the ultimate limiting factors will be how small a volume we are willing to split the simulation into, and how many clients we can fit in one of those volumes without sacrificing too much performance.
  • BParry_CIG

    Developer

    Posted:
    [hide]

    Hi Ben. Can you please clarify what you mean by “the engine is 100% D3D 11 right now”? I believe transitioning to Vulkan has been worked on for years? Will SQ42 not have Vulkan support at release? When do you expect to have full transition to Vulkan? What sort of visual improvements can we expect once the transition is done? Thanks.

    Hi @RoyFokker,
    I meant that in the sense that there's nothing you're seeing that hasn't come to you through D3D11 - you can't mix and match APIs piece by piece. So if a feature looks a little different after one of our patches, one shouldn't think "oh, they must have moved that bit to a different API"; it'll just be that we've improved it (or broken it, I guess) some other way.
    The ongoing conversion work is a little hard to explain, but I'll give it a shot anyway. Bear in mind that I'm not the guy doing it...
    Adding in support for a new API could theoretically be a somewhat straightforward job. Since CryEngine has had to support a ton of different ones in its history - D3D9, OpenGL, various console flavours - most of our code talks to wrappers of varying thickness to avoid having to go back and rewrite everything whenever a new one comes along. There are always a few oddities, Lumberyard for instance has added some mysterious (to me) conditional code to deal with how memory works on some mobile platforms, and there's bits left over from CryEngine supporting the 360, but let's ignore them.
    What happens, though, if you naïvely port from a higher-level API to a lower-level one, as in the case of D3D11 to D3D12 or Vulkan, is that you strip away a layer of driver complexity and then end up writing almost the same stuff yourself in the wrapper layer, because the code outside the wrapper assumed the conveniences and limitations of what was there before.
    That might seem kind of abstract, so I'll give a concrete example. Under D3D11, when you want a texture loaded on the GPU, you just hand the driver some memory and it gives you back a handle. If you load more than will fit into GPU memory, the driver quietly works out what's best to swap out into normal memory and you don't even know it happened, besides the performance cost. With the new APIs, you're responsible for reserving the right sizes of memory, resizing and swapping where appropriate, etc. Theoretically that's far more efficient, you know what you want where and less stuff gets moved around, but if your outside-wrapper code is still trying to add and remove things without warning, your wrapper just ends up having to handle every possible case anyway, only now you don't have the advantage of a decade of fine tuning by your hardware vendor, so a lot of early Mantle/D3D12 ports actually turned out to run slower rather than faster.
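The texture example can itself be made concrete with a toy model (purely illustrative; names, sizes, and the eviction policy are invented, not the engine's). Under the new APIs, the bookkeeping the D3D11 driver used to do quietly - deciding what to swap out of GPU memory when the budget is exceeded - becomes the application's problem:

```python
# Toy residency manager: evict least-recently-used textures when the
# (invented) VRAM budget would be exceeded by a new request.

from collections import OrderedDict

class VramCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.resident = OrderedDict()   # name -> size, in LRU order

    def request(self, name, size):
        if name in self.resident:
            self.resident.move_to_end(name)   # touched: now most recent
            return []
        evicted = []
        while sum(self.resident.values()) + size > self.budget:
            old, _ = self.resident.popitem(last=False)  # evict LRU entry
            evicted.append(old)
        self.resident[name] = size
        return evicted   # what we'd copy back to system memory

cache = VramCache(budget_bytes=100)
cache.request("rock_albedo", 60)
cache.request("ship_hull", 30)
print(cache.request("helmet_normal", 40))  # ['rock_albedo'] gets evicted
```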
    So a lot of the work in this conversion is about restructuring how the engine approaches render work. As one example, standardising how jobs ask for temporary memory gives us a pretty good saving even now, but also means that the engine understands the concept of allocating things from a shared pool, and so is able to ask for the right amount once we change API.
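The "temporary memory from a shared pool" idea can be sketched as a linear (bump) allocator: each allocation just advances an offset in one pool, the whole pool resets every frame, and the high-water mark tells you how much to reserve up front once the API lets you. This is a minimal sketch with invented names, not the engine's allocator:

```python
# Toy per-frame linear allocator with aligned allocations.

class FrameAllocator:
    def __init__(self, capacity):
        self.capacity = capacity
        self.offset = 0
        self.high_water = 0   # worst case seen: what we'd reserve on the GPU

    def alloc(self, size, align=16):
        start = (self.offset + align - 1) // align * align  # align up
        if start + size > self.capacity:
            raise MemoryError("frame pool exhausted")
        self.offset = start + size
        self.high_water = max(self.high_water, self.offset)
        return start   # byte offset into the shared pool

    def reset(self):   # called once per frame
        self.offset = 0

fa = FrameAllocator(1024)
a = fa.alloc(100)   # offset 0
b = fa.alloc(50)    # aligned up to offset 112
fa.reset()
print(a, b, fa.high_water)  # 0 112 162
```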
    In another example, the new APIs would let us offload the work of creating command lists for the GPU onto multiple threads, but the engine previously had to do it single-threaded, so tasks were written in a way where one can depend on things that were done by the one before it. One of the sweeping changes that came through my code last year was that each piece of rendering work is now placed in a packet that can't see the others, in preparation for the work of filling and executing those packets being moved off-thread.
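The isolated-packet idea can be illustrated like this (names invented; a conceptual sketch rather than the engine's actual job system): because each packet only sees its own inputs, recording can move to worker threads while submission order stays deterministic:

```python
# Sketch: fill independent command packets on a thread pool, then
# execute them in a fixed order regardless of which thread finished first.

from concurrent.futures import ThreadPoolExecutor

def record_packet(index, draw_calls):
    # each packet only sees its own inputs - no shared mutable state
    return {"index": index, "commands": [f"draw({d})" for d in draw_calls]}

work = [(0, ["terrain"]), (1, ["ships", "props"]), (2, ["ui"])]

with ThreadPoolExecutor(max_workers=3) as pool:
    packets = list(pool.map(lambda w: record_packet(*w), work))

# submission order is by packet index, not thread completion order
packets.sort(key=lambda p: p["index"])
print([p["commands"] for p in packets])
# [['draw(terrain)'], ['draw(ships)', 'draw(props)'], ['draw(ui)']]
```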

    As you might guess from these examples, the visual improvements we're expecting from all this are things like being able to push more objects for less CPU time, saving (and then spending) lots of memory, that kind of thing. It's a hard topic to talk about because there isn't really a big unique thing you can point to in the game and say "look at that magnificent Vulkan-exclusive rendering feature!", the way you could with e.g. D3D11 and tessellation. It's more that our art teams are always going to be pushing at the limits of what we can make run fast, and this should give us the ability to move those limits further up.
    Programmer - Graphics Team
  • CanguroGuro

    Posts: 155

    Posted:
    [hide]


    [...]
    As you might guess from these examples, the visual improvements we're expecting from all this are things like being able to push more objects for less CPU time, saving (and then spending) lots of memory, that kind of thing. It's a hard topic to talk about because there isn't really a big unique thing you can point to in the game and say "look at that magnificent Vulkan-exclusive rendering feature!", the way you could with e.g. D3D11 and tessellation. It's more that our art teams are always going to be pushing at the limits of what we can make run fast, and this should give us the ability to move those limits further up.

    That was a great explanation Ben, thank you!
    May I ask how far are we from a fully functional Vulkan implementation? Not asking for a delivery date of course, just an overall idea of how much work is left to do, and maybe what's the priority on this... thinking that it may be needed for SQ42...?
  • kaonashi

    Posts: 87

    Posted:
    One question:

    Is it possible to establish an update process that doesn't throw you out of a game once an update occurs? The current behaviour is not acceptable when you are directly in a game and the connection breaks up because a patch is available.

    Other game clients such as Steam or GOG Galaxy do this already, they provide a patch for a game but they do not force you to install an update immediately. CIG should learn from this.

    Regards
  • Zyliot

    Posts: 1

    Posted:
    Edited: by Zyliot
    I own an Alienware computer. Is there any plan to add AlienwareFX support, or likewise support for other computer/peripheral manufacturers that do lighting profiles? I think this would be pretty badass. Thanks for your time everybody.
  • Dictator

    Posts: 232

    Posted:
    Edited: by Dictator
    @BParry_CIG and @ABrown_CIG
    [hide]

    Keep em coming!

    Famous last words!
    [hide]

    Seconding the impressed-ness with the summary. I'm meant to be writing something like that up before we make changes... I might just use this.

    I am flattered. Very flattered.
    [hide]

    While the exact unification plan isn't nailed down, our rough view is that any system we have needs an answer for when it's inside the Froxel Fog's range, and an answer for when it's outside. We basically end up with three categories:
    1) In some cases, the answer will be that it's small enough to disappear once it's beyond the range; this is the way standard LY deals with fog volumes, for instance.
    2) In other cases, where Froxel Fog would work better or another system that uses it may be present (e.g. will fog volumes exist at cloud altitude?), we'd like the long-distance solution to be able to export its own data into the froxel buffers, so that there's no discontinuity where the two solutions switch over.
    3) Finally there may be cases where the "distant" solution looks as good or better than Froxel Fog, and in that case the systems don't need to interact at all.

    Since planetary clouds are going to be visible at very long distances, we know they can't be solved only with Froxel Fog, so they'll fall into either category 2 or 3.
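The three categories above can be sketched as a toy dispatcher. This is purely illustrative - the category names, range, and size cutoff are invented, not the actual system:

```python
# Toy routing of a fog volume to one of the three categories described:
# 1) small volumes simply cull beyond the froxel range,
# 2) large ones hand off by exporting into the froxel buffers' successor,
# 3) some have a distant solution that bypasses froxels entirely.

FROXEL_RANGE = 500.0  # metres, invented for illustration

def fog_strategy(volume_size, distance, has_own_distant_solution):
    if has_own_distant_solution and distance > FROXEL_RANGE:
        return "distant-solution"        # category 3
    if distance <= FROXEL_RANGE:
        return "froxel"                  # rendered in the froxel buffers
    if volume_size < 50.0:
        return "culled"                  # category 1: just disappears
    return "export-into-froxels"         # category 2 handoff

print(fog_strategy(10.0, 800.0, False))   # culled
print(fog_strategy(5000.0, 800.0, True))  # distant-solution
```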

    I see. How does this division between froxel vs. world-space solutions for volumetrics affect very large game objects like gas giants or, eventually, stars? I have no idea if such stellar objects will be rendered by extensions of the planet tech (the original Eric Bruneton paper did not cover anything other than terrestrial atmospheric planets, IIRC). Stars and gas giants exhibit similar atmospheric behaviour on the macro level, which is of course covered by planet tech (Mie scattering, Rayleigh scattering, etc.), yet on the meso and micro level (relative, of course, as we are talking about hundreds if not tens of thousands of square kilometers) you can see surface-wide cloud-like atmospheric formations on the surface of stars and gas giants. It is honestly a fascinating question, and I am curious how you guys will eventually solve it. I know Crusader atm is done with a scrolling POM texture, but when you can fly closer and even into that upper stratosphere... Or you have a place like Hurston, which is situated in the upper terraformed atmosphere of a gas giant? Cloud city? Hrm...
    [hide]

    I may have posted the video of Heitz's LTC paper to Ali's Facebook page under the heading "ALI. ALI LOOK" the moment I heard about it. As it happens though, we're not using it - while it's miles ahead of the pack in performance/quality tradeoff, it's still a huge cost (approx 1ms per fullscreen light on a GTX980) and we would then have no way to downscale on lower-spec machines given how different the scenes would look with them disabled.
    Instead, the diffuse component is closely based on Sébastien Lagarde's work for planar lights in Frostbite engine, and the specular component is somewhat based on the version we got from CryEngine, the reworking mostly consisted of taking it apart to find out which approximations led to artefacts, replacing them with alternatives, and exhaustively testing them for new issues.
    One thing that did come out of this rework, however, is that the planar lights now live in their own rendering pass. Given the time, it would be interesting to see if we could add some kind of "ultra mode" that replaces this pass with a version that uses LTC lights.

    I am always a fan of experimental ultra-modes! In fact, I adore them (one of my favourite parts of the original crysis was unlocking unused cvars and testing to see how they worked even though they crippled performance).

    Although it must be said as a warning: there is definitely a tendency for massive overreactions in player communities and in the 'gaming press' as people crank everything up to the nominal 'ultra' and fail to register that some ultra settings are there for future-proofing, giving a nice bonus to future GPU users even though they can have a relatively small visual return. I guess the best way to do it is to label the settings well and provide proper warnings. Perhaps, like some newer PC versions of games, provide detailed breakdowns of which settings affect which bottleneck the most (VRAM usage, CPU- or GPU-intensive, etc.) - Gears of War 4 did this really well. Either that, or you could just hide the controlling console variable and make it so that power users who already understand such things can edit and enable experimental, performance-crushing graphical settings via autoexecs and cfgs.
    [hide]

    Specular occlusion and a better SSDO term are definitely on our radar. Unfortunately shadow maps from large area lights, as you mention, are a huge open research topic, so we're focusing on improving softer screen space techniques to take up the slack.

    SSDO looking better is always nice. I presume you also mean things like screen-space contact shadows and the like? I recently saw them used for the sun in Kingdom Come: Deliverance, The Division, and Gears of War 4 and thought they looked rather fantastic. I guess the question then is how, and whether, it is possible to extend such screen-space contact shadows to work for non-sun lights indoors. I can imagine that having POM self-shadowing indoors, or crisp screen-space shadows on a character's face indoors, would do wonders given the heavy prevalence of POM on all the game's assets and how high-poly EVERYTHING generally is.
    ------

  • Dictator

    Posts: 232

    Posted:
    Edited: by Dictator
    Who would have thought there is a post character limit?!

    Apropos area lighting and such:
    I have no idea how far along you guys are in development or what system exactly you are using for GI approximations in-game (IIRC you were just recently moving the cube maps to being updated at run time (real time?) on the GPU vs. the CPU as in default Lumberyard), hence I am not sure how useful this presentation is, but I found it a rather fascinating extension to the irradiance probe work seen in games like Far Cry 3 or Decima Engine games. The indirect shadowing is also ultra impressive-looking in comparison to the default implementation, which completely lacks any indirect obscurance.

    developer.download.nvidia.com/assets/gameworks/downloads/regular/GDC17/McGuire2017LightField-GDCSlides.pdf?autho=1491155870_7aec51bdd488700aeed3b75f081e48b7&file=McGuire2017LightField-GDCSlides.pdf

    The lightfield stuff after slide 31 is also really cool for giving a "viable" way to have nice glossy reflections and area-light shadows, etc. beyond screen space... but at the same time that is 10ms on a very modern GPU @ 1080p :X To be fair though, it covers a lot of cases and captures a lot of phenomena and edge cases that the eye just kind of 'expects' to be there, which are costly anyway. Perhaps via the aforementioned "experimental setting" in a hidden advanced menu? :P
    [hide]

    We have two solutions to this! First, as you say, there's depth-fixup. We're already using that on hair, and (I think) on particles. In general though, the trick is to do depth fixup only on the parts that are opaque enough that they dominate the image, which is less easy to do for things like glass and holograms.
    The second solution is that we simply sort objects into two lists based on whether they're beyond a certain (situationally-varying) distance. Things beyond that distance are drawn before the DOF and motion blur calculations, things nearer than that plane are left blur-free. While this clearly isn't a perfect solution, nearby transparencies tend to be more problematic when they blur with what's behind them, and more distant transparencies are problematic if they remain crisp when objects around them are blurred. This second solution is already working at the render end, but we still need to hook it up to the systems that will control it.
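The two-list split described above can be sketched in a few lines (an illustration of the idea only; names and the split distance are invented): transparencies beyond the split distance are drawn before DOF/motion blur so they blur with the scene, while nearer ones are drawn after and stay crisp:

```python
# Toy partition of transparent objects around a situational split distance.

def split_transparencies(objects, split_distance):
    """objects: list of (name, distance). Returns (pre_blur, post_blur)."""
    pre_blur = [o for o in objects if o[1] >= split_distance]   # gets blurred
    post_blur = [o for o in objects if o[1] < split_distance]   # stays crisp
    return pre_blur, post_blur

objs = [("cockpit_glass", 1.5), ("hologram", 3.0), ("station_window", 250.0)]
far, near = split_transparencies(objs, split_distance=50.0)
print([n for n, _ in far])   # ['station_window'] - blurred with the scene
print([n for n, _ in near])  # ['cockpit_glass', 'hologram'] - left crisp
```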

    ooo, I guess that is how you would make stuff like mobiGlas work within DOF?

    I know this is a bit tertiary, but speaking of depth of field I am really reminded of the great presentation from Silicon Studio about the subtle effects they added to their engine's depth of field implementation:
    https://siliconstudio.co.jp/rd/presentations/files/siggraph2015/08_SubtleAnamorphicLensEffects_S2015_Kawase_EN.pdf

    Some of the described effects are very, very subtle, but I think some are definitely quite desirable in a sci-fi game, or just a game in general. The anamorphic stretching of bokeh shapes with the light barrel distortion and such - that looks VERY sci-fi IMO. The paper also makes a very important point about chromatic aberration and colour fringing. PC players in general really hate chromatic aberration (which I understand, to be honest), but I think that is mainly because it is usually implemented in a physically incorrect manner that affects the whole image regardless of focal length and aperture. Currently in Lumberyard, and in nearly every game out there, chromatic aberration is just done as a full-screen effect distorting colour more and more towards the screen edge. In reality, as far as I understand it, a camera should only really have chromatic aberration affect the regions of the image which are in fact out of focus.

    For example look at this image I just took from the Star Trek TNG Blu Ray:
    vlcsnap-2015-03-24-11ayu4o.png

    Chromatic aberration is not present in the focused regions of the above image (the foreground character). Rather, it is only visible in the out-of-focus areas (the yellow fringing within the white bokeh shapes). The image also shows how nice some of those other subtle anamorphic effects can be, like how the barrel distortion visible in the bokeh shapes frames the in-focus region of the image (here the empath/telepath in the foreground). The bokeh shapes form a ring around him, bringing him even more into focus than they would if they were merely "flat" and perfectly round.
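That "fringing only where defocused" behaviour can be sketched by driving the aberration offset from the circle of confusion rather than from screen position. This is a toy model with invented constants, just to illustrate the point about physically motivated CA:

```python
# Toy depth-of-field-driven chromatic aberration: fringe size scales with
# defocus (circle of confusion), so in-focus regions stay fringe-free.

def coc_radius(depth, focus_depth, aperture=2.0, scale=0.05):
    # simplified circle of confusion: grows with defocus, zero in focus
    return abs(depth - focus_depth) / max(depth, 1e-6) * aperture * scale

def aberration_offset(depth, focus_depth, max_offset_px=3.0):
    # fringe offset in pixels, proportional to defocus rather than
    # distance from the screen centre
    return min(coc_radius(depth, focus_depth) * 50.0, max_offset_px)

print(aberration_offset(10.0, 10.0))        # 0.0 - in focus, no fringing
print(aberration_offset(40.0, 10.0) > 0.0)  # True - defocused background fringes
```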
    ----

    Well, I sure wrote a lot! Perhaps there is some sense in there... Thx for answering any questions in there and for taking the time to read my musings!
    Best,
  • VesperTV

    Posts: 8

    Posted:
    Hello, I have 2 question for you guys:

    1: What is being done about the collision in the game ?
    {
    It's so easy to glitch through a wall at the moment. Like, you just walk into one of those "toilet doors", and 2 seconds later you're in EVA outside of Port Olisar.

    Or you enter a ship, say an Aegis Avenger Titan Renegade, and you have a 30% chance of glitching outside of the ship when trying to enter the buffer zone before the pilot seat.
    }

    2: Are you planning on adding default mapping for your french users ?
    {
    AZERTY keyboard mapping is totally different from QWERTY. When you save settings, you use the "physical" location of the key on the keyboard rather than the actual key (for example, it took me a while to realize that to accept a party invitation with "[", I actually had to press the "^" key).
    E.g.: https://upload.wikimedia.org/wikipedia/commons/b/b9/KB_France.svg (Belgian AZERTY follows QWERTY mapping better)
    }
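The AZERTY issue above comes down to bindings being stored by physical scancode rather than by the character the key produces. A hypothetical illustration (the scancode tables below are tiny invented fragments, not real layout data):

```python
# If a binding stores a physical scancode, the same binding resolves to
# different characters depending on the active keyboard layout.

QWERTY = {16: "q", 17: "w", 30: "a"}   # scancode -> character (invented)
AZERTY = {16: "a", 17: "z", 30: "q"}

def key_for_binding(scancode, layout):
    return layout.get(scancode, "?")

binding = 16  # stored as a physical key position
print(key_for_binding(binding, QWERTY))  # q - what the default mapping assumes
print(key_for_binding(binding, AZERTY))  # a - what a French user must press
```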

    Keep up the good work!
  • Commander_zx7

    Posts: 152

    Posted:
    Sorry if this has been asked before, but how does the engine deal with LODs? I have noticed that the LODs will sometimes jump or flash from one to the other, which is often pretty noticeable. Is this just an LOD distance balance thing, or is it something else that can cause such an effect?
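For context on the question above: LOD "popping" typically happens when a hard distance threshold flips the mesh the moment the camera crosses it, and flickering happens when the camera hovers right at that threshold. One common mitigation (sketched here with invented thresholds; not necessarily what the engine does) is hysteresis, using a different switch-in and switch-out distance:

```python
# Toy LOD selection with hysteresis to avoid flicker at the boundary.

def pick_lod(distance, current_lod, thresholds=(50.0, 150.0), hysteresis=10.0):
    # thresholds[i] is the nominal distance where LOD i hands over to i+1
    target = sum(1 for t in thresholds if distance > t)
    if target > current_lod and distance > thresholds[current_lod] + hysteresis:
        return current_lod + 1   # step down in detail once clearly past
    if target < current_lod and distance < thresholds[current_lod - 1] - hysteresis:
        return current_lod - 1   # step back up once clearly inside
    return current_lod           # otherwise hold: no flicker at the edge

lod = 0
lod = pick_lod(55.0, lod)   # 0 - just past 50, but inside the hysteresis band
lod = pick_lod(70.0, lod)   # 1 - clearly past the threshold
lod = pick_lod(55.0, lod)   # 1 - holds, no flicker back
lod = pick_lod(35.0, lod)   # 0 - clearly back inside
print(lod)
```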
  • ABrown_CIG

    Staff

    Posted:
    [hide]

    Hi there everyone at CIG i hope you are all well. Thanks for the great work! love what you are doing.

    My question is regarding VRam usage.

    My current setup is a gtx 1080 and i run at 3440x1440.

    I find that I am usually pinned at 7.8-9GB of VRAM usage. I did try lowering my resolution to 1920x1080 and found my VRAM use was the same. I find that this causes some strange texture issues, such as Benny's machines rendering in very blurred and unrecognizable (think 240p instead of 4K). I suspect the remaining 0.1-2GB of VRAM is possibly system reserved.

    Could this be considered a memory leak, or is it just the large amount of textures we are loading into VRAM? Watching several of the regular shows posted by CIG, I know things are getting optimized on a daily basis, but would it be worth considering a GPU with more VRAM?

    Any input would be highly welcome.

    Regards

    FloppyPoppy

    Hi FloppyPoppy,

    I'm currently re-working the way we allocate VRAM for screen-sized textures so that we can re-use memory more easily, because these textures are one of the major contributors to VRAM usage, especially when running in higher resolutions. So you can hopefully expect some savings here on the next release.

    After we've allocated the required VRAM for the base engine we then give much of the remainder to the texture streaming system, because the more textures we can keep in memory, the less likely you are to need to stream a lot of textures off disk quickly, which can cause visual popping and increased disk IO.

    If you've seen blurry textures then this is almost certainly a bug in the code that determines which resolution texture to stream, and not a problem with the amount of VRAM you have, therefore I wouldn't recommend more VRAM in an attempt to solve this issue.

    For now our optimizations are primarily for more 'typical' resolutions, but further down the line we do plan optimizations that are specifically tailored towards multi-monitor / 4k, and at this point we'll have a better idea of our ideal VRAM requirements for a silky smooth experience on such setups.
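The budgeting described above - base engine allocations first, remainder to the streamer, which then trades mip levels for fit - can be sketched as follows. Everything here (sizes, priorities, the halving rule) is invented for illustration only:

```python
# Toy VRAM budgeter: drop mip levels (approximated as halving the size)
# on the lowest-priority textures until the working set fits the budget.

def fit_to_budget(textures, budget):
    """textures: list of [name, size, priority]. Returns name -> final size."""
    textures = [list(t) for t in textures]
    while sum(t[1] for t in textures) > budget:
        victim = min(textures, key=lambda t: t[2])  # lowest priority first
        victim[1] //= 2                             # drop one mip level
    return {t[0]: t[1] for t in textures}

total_vram = 8192   # MB, invented
base_engine = 2048  # reserved before the streamer gets the remainder
result = fit_to_budget(
    [["hero_ship", 4096, 10], ["distant_rock", 4096, 1]],
    budget=total_vram - base_engine)
print(result)  # {'hero_ship': 4096, 'distant_rock': 2048}
```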
    [hide]

    Hello again dear Dev team! :)

    As technology advances, the code may have to adapt to it to use it (efficient). There have been some talks about Ryzen optimization (Ashes of the Singularity), some Vulkan talks from different developers (DOTA 2, Doom etc) and of course we also have a lot of graphic vendor specific magic going on (AMD, Nvidia) while also getting new software technologies out of the "applied science" zone (one of my favs is this channel: Two Minute Papers)

    How do you guys stay on track with all of that? Do you have someone looking out for it, do the devs find out themselves, or does it come from the producers who say "There is this new thing, I want that"? I'd imagine you could have a person working full-time just to stay up to date with everything going on, but I guess such a person does not exist. So... how do you do it?

    Cheers!

    Hi Valdore,

    There's no one guy who keeps track of all these nuggets of information on how to optimize for each platform, CPU, GPU, API etc. Instead we each just try to keep up with the latest presentations, blogs & tweets, and after filtering out the noise share any relevant knowledge between team members. In reality a lot of this information isn't that relevant in day-to-day working and it's more when you come to write or optimize a specific bit of code that you look at the latest recommendations from the relevant hardware/software vendors as it's near impossible to keep up with the latest guidelines across every topic.

    Cheers,

    Ali Brown - Director of Graphics Engineering
  • Daedroth

    Posts: 16206

    Posted:
    Edited: by Daedroth
    @ABrown_CIG:
    [hide]

    and eventually drop support for DX11 as this shouldn't effect any of our backers.

    I'd still like to know: how did you come to that conclusion? What about backers with graphics cards like the GeForce 560 Ti, which satisfies Star Citizen's minimum requirements (and runs the game well enough) but will not support Vulkan? Does it not "effect" someone if they have to replace their graphics card in order to keep playing this alpha?

    For reference, the current minimum requirements from the Download page:

    Windows 7 (64 bit) – Service Pack 1, Windows 8 (64 bit), Windows 10 – Anniversary Update (64 bit)
    DirectX 11 graphics card with 1GB Video RAM
    Quad core CPU
    8GB Memory

    -_-
  • CheeseNorris

    Posts: 1123

    Posted:
    Edited: by CheeseNorris
    Just a question: I bought the Tobii eye tracking for ED and love how it works there. Seeing that TrackIR has been implemented into SC, will TobiiX get a shot for SC too? It would really be great. It's not accurate enough to use weapons with anyway, but it really helps if you are helming a ship, especially a large one. Looking around with just your eyes, with head tracking already programmed into it - it's 2 in 1.

    Edit: - Please ping me upon reply, may forget about this
  • Dowlphin

    Posts: 1597

    Posted:
    Edited: by Dowlphin
    (reposted in Spectrum in case this gets closed before an answer)

    Why does Star Citizen reserve roughly twice as much RAM as is actually needed at any time?
    I have 16GB of RAM, and that was always sufficient for any game without having a pagefile. Star Citizen seems to be an exception. What's the idea behind the commit size being generally twice as large as the working set? As a layman, that 'formula' seems a bit unrealistic to me.
    E.g. when working set is 3 GB, commit is 6 GB. When working set is 5 GB (apparently roughly the current maximum needed), commit is 10 GB.
    (I did enable the pagefile now, understanding that apparently Win7 is smart enough to only use it once physical RAM is full. But it still feels odd to me to enable it just to guarantee Star Citizen the availability of memory it will never need.)
    Life artist | guide & seeker | student & teacher (a preacher with less PR and more tea) | pantheist & puntheist, pontheist & pon-3ist
    http://dowlphin.de
    | Sabre, Constellation Phoenix, M50, Mustang Gamma
  • HBZK

    Posts: 129

    Posted:
    Hey guys, are there any plans to introduce Vulkan into Patch 3.0? Or is that too quick?
    PC Hardware Enthusiast. Moderator for Tom's Hardware. http://www.tomshardware.com/community/profile-1695593.htm

    My Build: http://pcpartpicker.com/b/tzzMnQ