Tutorials & Guides / A guide to texture optimisation

Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 6th Feb 2021 10:24 Edited at: 6th Feb 2021 10:47
A Guide to Texture Optimisation


Optimisation, the art of squeezing as much performance as possible from your hardware, is one of the most fundamental skills in game development. Unfortunately, because Game Guru appeals to inexperienced developers and high-quality media for our games is so easy to obtain, optimisation often gets overlooked, with catastrophic results. We often see Game Guru games that run terribly because they are poorly optimised, and many of us have experienced difficulty creating games of any length, particularly once our levels start getting more complex and detailed.

In this three part guide, we'll be looking at how to optimise one of the biggest culprits of poor performance: textures. We'll look at why optimising textures is so important, and how tweaking two simple parameters can massively improve the performance and stability of your games. Sit tight, grab a cup of tea, this is a long one...but so, so worth it!

Part 1: The cost of textures
Textures are the single biggest drain on your graphics card during a game. When your game loads, your models and textures are passed to your graphics card to store and display. Your graphics card has a finite amount of GPU memory (called VRAM) available - these days most consumer cards have between 2GB and 8GB of VRAM - and every pixel and vertex you send to it has a cost in memory terms. To work out the cost of a texture, we need to know 1) its dimensions, and 2) its bit-depth. Let's take this texture as an example:


An example of a 1024 x 1024 texture


The first part is easy; any program will tell you it's 1024 pixels wide by 1024 pixels high. In total, it's made up of just over a million pixels (1024 x 1024 = 1,048,576 pixels).


Each pixel is made up of a red channel, green channel, and blue channel (and sometimes an alpha channel)


Now, bit-depth. Each of those pixels has a colour made up of three channels (red, green, and blue), and each channel has a value between 0 and 255. A pixel with the colour R42,G102,B128 has a little red in it, and a lot more green and especially blue. Rather than say there are 256 possible values for each channel, we instead express this as 2^8 (2x2x2x2x2x2x2x2 = 256), and call this 8-bit colour.

Since all three channels have values somewhere between 0 and 255, we're actually dealing with three lots of 2^8. Bit-depth is the number of bits per channel x the number of channels. We have three channels here (red, green, blue), so our bit-depth is 8 x 3 = 24.

Once you know your dimensions and bit-depth, you can work out how much VRAM it will take to display your texture.

1024 x 1024 = 1,048,576 total pixels
1,048,576 x 24 bits in each pixel = 25,165,824 total bits in the image
25,165,824 bits / 8 bits in every byte = 3,145,728 bytes
3,145,728 bytes / 1024 = 3,072kB
3,072kB / 1024 = 3MB

So, our 1024 x 1024 texture uses exactly 3MB of the VRAM on offer. It doesn't sound a lot, but it can soon add up. If your model takes advantage of Game Guru's PBR workflow, it may have six texture maps (_color, _normal, _metalness, _gloss, _ao, _illumination). If each is 1024 x 1024, that's 18MB just on textures for one model! The problem is only multiplied with higher resolutions. Six 2048 x 2048 textures will take up 72MB. And saving with an alpha channel (alpha channels control transparency) adds another 8-bit channel to factor in, giving a bit-depth of 32 and adding a third to the GPU requirements of your texture (in reality, the only texture you should ever save with an alpha channel is the _color map; on the other maps it's all wasted information).
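If you want to sanity-check these numbers yourself, here is a quick Python sketch of the same arithmetic (the 1024 x 1024 example and the six-map PBR set are the ones from above):

def texture_vram_bytes(width, height, bits_per_pixel=24):
    # Uncompressed VRAM cost of a single texture, ignoring mipmaps
    return width * height * bits_per_pixel // 8

# The 1024 x 1024, 24-bit example above
print(texture_vram_bytes(1024, 1024) / (1024 * 1024))      # 3.0 (MB)

# Six 1024 x 1024 PBR maps for one model
maps = ["_color", "_normal", "_metalness", "_gloss", "_ao", "_illumination"]
print(sum(texture_vram_bytes(1024, 1024) for _ in maps) / (1024 * 1024))   # 18.0 (MB)

# Adding an alpha channel (24 -> 32 bits) adds a third
print(texture_vram_bytes(1024, 1024, 32) / (1024 * 1024))  # 4.0 (MB)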

When it comes to adding unique assets to our levels, we're spoilt for choice. Between the numerous DLC packs, the TGC Store, and other third party sites like Turbosquid and CGTrader, it's pretty easy to get hold of high quality assets for your game. Artists will often provide textures at the highest quality because they a) want to present their work at its best, and b) don't know how or where their work will be used. The problem is that many inexperienced developers will take these high resolution, high quality textures and use them as-is, without any thought for optimisation. If they start using 4K (4096 x 4096) textures on every asset, it won't be long before they run out of VRAM, at best causing a drop in framerate but at worst causing full game-ending crashes. Since the artist can't know where their asset will be used, the onus is on the developer to optimise the textures they are provided with.


Your graphics card, struggling under the weight of all those 4K textures


Myth #1: But my texture is a jpg, and only takes up 6kB of disk space.
The filesize of your texture has no bearing on how much VRAM it takes up. It could be a 6kB jpg or a 3MB bmp; it will always take up the same amount of VRAM. That said, as we shall see later, file format is still a factor in optimisation, just for different reasons.

Myth #2: But my object is offscreen, doesn't that free up resources?
Sadly, no. Game Guru is very traditional in its approach to loading textures; every texture used by every asset in a level is loaded onto the GPU when the level loads. In contrast, many modern games support texture streaming, where textures are loaded and unloaded on the fly as needed. I found a great, concise explanation of texture streaming on reddit:


An explanation of texture streaming


Because Game Guru doesn't support texture streaming, it doesn't matter whether your texture is onscreen or not, or even in the same room; if it's in the level, it's loaded and uses up VRAM.

Myth #3: My game is slow because there's too many polygons onscreen.
Unlikely, as 3D meshes are infinitely cheaper than textures. A textured 3D model (static, not animated) costs 48 bytes per polygon. A single 3MB 1024 x 1024 texture takes up as much VRAM as a 65K polygon model, and most models in Game Guru are under 10,000 polygons anyway.
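As a rough sanity check on that comparison, using the 48-bytes-per-polygon figure quoted above:

# Assumes the guide's figure of 48 bytes per static, textured polygon
texture_bytes = 1024 * 1024 * 24 // 8        # one 1024 x 1024, 24-bit texture (3MB)
bytes_per_polygon = 48
print(texture_bytes // bytes_per_polygon)    # 65536 polygons for the same VRAM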

With this in mind, it's clear that every Game Guru developer should be looking at ways to minimise the cost of their textures on the GPU. There are two parameters you can adjust to do this, and to get the best results you'll need to look at both. The first is relatively straightforward: resolution.
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 6th Feb 2021 10:26 Edited at: 6th Feb 2021 10:43
Part 2: Resolution
If you halve each side of your texture, you shrink the area of your texture (and the amount of VRAM it requires) by 75%. Take our 1024 x 1024 texture as an example; we already worked out it needs 3MB of VRAM. But let's halve each side, reducing it to 512 x 512.

512 x 512 x 24 = 6,291,456 bits
6,291,456/8 = 786,432 bytes or 768kB

It's clear that resizing our textures can save considerable memory, but the question is: how small should we go? That depends on many factors, but one way to look at it is in terms of screen space: how many pixels will that texture actually take up on a player's screen in-game?


How high res does this milk carton really need to be?


Here I have a simple milk carton entity. It's pretty small and because it's on a table, I know the player will only ever be able to get so close to this entity; there's a limit to how much detail they will ever see. I need to consider how many pixels that entity will take up on a player's screen. Given that 84% of Steam users still run a monitor that is 1920 x 1080 or less, it seems sensible – even in 2020 – to assume I need to make my textures look acceptable for that resolution.


I used an art package to measure the physical number of pixels this carton takes up at my screen resolution


Here I've taken a screenshot on my 1920 x 1080 monitor at the closest distance a player will ever get to the milk carton. You can see the carton itself is less than 500 pixels high. Whatever texture size I use will be squashed into those 500 pixels. Therefore, players will not see any benefit in a 1024 x 1024 texture or a 2048 x 2048 texture for this entity, because it will always be squeezed down to around 500 pixels of screen space during the game.

By default, this entity comes with a 512 x 512 texture, which might seem like a sensible option; it's slightly bigger than a player at 1920 x 1080 will ever need, so detail will hold up well. But here's the thing: the player doesn't actually need to discern every pixel on the milk carton. So long as they can see some of the detail, their mind will fill in the blanks. The milk carton is also not that important to gameplay; it's not a pickup, and most players will just run past it. There's also a lot of contrast in the texture itself; the red stands out a lot against the white. All of this means we can get away with making the texture smaller.


Effect on visuals of reducing texture size


If we reduce it to 256 x 256, the texture now costs just 192kB of VRAM and the majority of players won't notice the difference, especially if the object is far away, tucked in a corner, only partially visible, or the room is dark. We could go one step further and reduce the texture to 128 x 128, which only costs 48kB. For me this is a little too far, but if the object was less visible or took up less screen space, this might also be fine. Another practical upshot of reducing texture size is that it usually results in a smaller filesize too.
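Here is a small sketch that reproduces the figures above for each candidate size (uncompressed 24-bit colour, no mipmaps):

def vram_kb(side, bits_per_pixel=24):
    return side * side * bits_per_pixel / 8 / 1024

for side in (1024, 512, 256, 128):
    print(f"{side} x {side}: {vram_kb(side):,.0f}kB")

# 1024 x 1024: 3,072kB
# 512 x 512: 768kB
# 256 x 256: 192kB
# 128 x 128: 48kB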


Effects on filesize and VRAM requirements of reducing texture size


Make sure you go through every asset in your game and play around with the texture sizes. Keep resizing them until you reach a level of detail that is acceptable in quality but also light on memory; you'll be surprised how much you can get away with. I made this switch with a 2048 x 2048 texture to begin with, but found a 512 x 512 texture was perfectly acceptable. Only at 256 x 256 did the detail start to get blurry.


I was surprised how far I could reduce this texture


Same with this table I downloaded from CGTrader. It came with beautiful 4K textures but I experimented with different sizes to see what I was prepared to accept. Anything below 1024 was too blurry, with 1024 itself being a good trade-off between detail and performance. Even if I just dropped the texture from 4096 to 2048, I'd save 36MB of VRAM, or enough to have twelve 1024 textures elsewhere in the level.


Ultimately, texture reduction is always done to personal preference


One of the few advantages of Game Guru Classic's haphazard interpretation of PBR is that each texture that makes up a material (_color, _metalness, _gloss etc.) is specified separately. This means you can tweak the size of each texture individually to save precious memory with little to no noticeable loss in overall quality. You can usually reduce the size of your _metalness and _gloss textures more than you can your _color and _normal maps.


I found very little difference in overall quality when reducing _gloss, _ao, and _metalness maps


By reducing the physical size of your textures, you reduce the resources required to display them, and therefore their impact on performance. This is the first major optimisation you can make. The second is compression: more specifically, understanding which format to save your textures in.
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 6th Feb 2021 10:27 Edited at: 6th Feb 2021 10:45
Part 3: Compression formats
Inexperienced developers may be tempted to save textures in a compressed format like jpg to save filesize, believing this increases performance. I'll say it again: the filesize of your texture has no bearing on how much VRAM it takes up.

The jpg format itself is also a lossy format, meaning the way it saves filesize is to throw away data from the image: it discards fine detail and subtle colour variation that the eye is less likely to notice. You can never get this data back after saving; it's gone forever. By saving in jpg, you permanently degrade your image. Never use it for textures.

Other developers may be tempted by the png format as it offers superior quality whilst maintaining a relatively small filesize. It also supports alpha channels (transparency), which jpg does not. Png is a lossless format, meaning data is never thrown away; instead, clever algorithms encode and decode the image data at either end, which requires extra processing but results in a smaller filesize. Png is a popular format for artists to supply textures in because it is lossless, but that extra processing makes it less than ideal for game engines.

In truth, there is only one format you should be saving your textures in: DirectDraw Surface, otherwise known as dds. It is the native format for DirectX, meaning any other image format has to be converted to dds before your graphics card can display it. Jpg goes in, dds comes out. Png goes in, dds comes out. You can actually see the effects of this conversion process in action in Game Guru. Here's a comparison between an asset using png textures and one using dds. Despite dds being a lossy format like jpg, the details appear sharper when using dds because your graphics card isn't having to convert a png to dds.


Comparison of PNG vs. DDS


The tool you use to save dds files matters. Nvidia do a free DDS plugin for Photoshop which is widely considered the best tool out there in terms of the quality of the textures it produces. If you don't own Photoshop, a decent free option is Paint.net, which is what I've used here. Regardless of the tool you use, you'll quickly realise dds comes in a huge variety of different flavours and compression types, so it's vital we consider which one is best for which type of texture.

To see how each compression format affects a texture, we'll use this stress-testing texture I made.


The stress-tester texture has graduated coloured bands and high detail areas designed to find compression artefacts


Block-compressed dds is lossy: it compresses your texture by chopping it up into 4x4 blocks, finding the two most contrasting colours in each block, and assuming the other fourteen pixels in the block all have values somewhere on a line between those two outliers. To stress-test this, our test texture has a band of tightly-packed multicoloured pixels which are impossible to arrange on a straight line from one colour to another.
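To make that concrete, here is a much-simplified Python sketch of the idea behind BC1-style block compression. A real encoder searches much harder for the best endpoints, but the principle is the same: two 5:6:5 endpoint colours plus a 2-bit palette index per pixel.

def compress_block_bc1_style(block):
    # block: sixteen (r, g, b) tuples for one 4x4 block, values 0-255
    def brightness(c):
        r, g, b = c
        return 0.299 * r + 0.587 * g + 0.114 * b

    # 1. Pick the two most contrasting colours as endpoints
    lo, hi = min(block, key=brightness), max(block, key=brightness)

    # 2. Quantise the endpoints to 5:6:5 (32 reds, 64 greens, 32 blues)
    def quantise_565(c):
        r, g, b = c
        return (round(r / 255 * 31) * 255 // 31,
                round(g / 255 * 63) * 255 // 63,
                round(b / 255 * 31) * 255 // 31)
    lo, hi = quantise_565(lo), quantise_565(hi)

    # 3. Build a four-colour palette: the endpoints plus two blends between them
    def lerp(a, b, t):
        return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))
    palette = [lo, lerp(lo, hi, 1 / 3), lerp(lo, hi, 2 / 3), hi]

    # 4. Every pixel is reduced to a 2-bit index into that palette
    def nearest(c):
        return min(range(4), key=lambda i: sum((c[j] - palette[i][j]) ** 2 for j in range(3)))
    indices = [nearest(c) for c in block]

    # 2 x 16 bits of endpoints + 16 x 2 bits of indices = 64 bits for 16 pixels
    return lo, hi, indices

You can see from step 3 why the tightly-packed band in the stress-test defeats it: no straight line between two endpoint colours passes anywhere near all sixteen pixels of such a block.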


A pattern impossible to replicate with 4x4 block compression


BC1
BC1 is the oldest format and offers the most compression and least strain on the GPU: each 4x4 block is squeezed into just 64 bits, an effective 4 bits per pixel rather than 24. The two endpoint colours in each block are stored with 5:6:5 precision; this means that instead of 256 possible values in each channel, there are 32 for red (2^5), 64 for green (2^6), and 32 for blue (2^5). The lack of available colours causes a noticeable 'step' from one shade to the next; this is called "banding".


Colour banding in DDS BC1


Because there are more available shades of green than of red and blue, you'll also find BC1 textures struggle to replicate greys well, often producing green or purple tinted pixels.


Green and purple artefacts are common in many DDS formats


It also totally falls apart when dealing with complex colour blocks.


Well...I guess that means our stress-test works


BC1 does support an alpha channel, but only a 1-bit alpha channel; a pixel is either fully opaque or fully transparent, there is no way to have smooth transitions between the two. This can still be useful for instances where you need a clear separation between opaque and transparent, such as on foliage and wire fences.


1-bit alpha channels in DDS BC1


By squeezing all this information into just 4 bits per pixel, BC1 offers the greatest saving in terms of VRAM at the cost of outright quality. If your texture doesn't have too much grey in it, doesn't need a complex alpha channel, and isn't too tightly-detailed, BC1 is a viable option.

512 x 512 x 4 = 1,048,576 bits
1,048,576 / 8 = 131,072 bytes or 128kB VRAM

BC4
BC4 is a grayscale format that offers the same filesize and VRAM usage as a BC1 texture (an effective 4 bits per pixel) but with significantly better quality. Being grayscale, it only has to store one channel of data, not three, allowing us to have the full complement of 256 shades of grey. BC4 is ideally suited to grayscale textures like _metalness, _AO, and _gloss, but it does not support alpha channels.


BC4 is excellent for grayscale textures


BC3
BC3 is essentially a combination of BC1 and BC4. It stores RGB colour with the same 5:6:5 endpoint compression as BC1, but stores the alpha channel in a separate BC4-style block built from full 8-bit grayscale endpoints. BC3 is a popular choice for any texture that requires a full alpha channel in order to simulate semi-transparency or gradations. The extra alpha block doubles the cost to an effective 8 bits per pixel.

512 x 512 x 8 = 2,097,152 bits
2,097,152 / 8 = 262,144 bytes or 256kB VRAM


BC3: The best of both worlds?


BC5
BC5 is sometimes recommended for tangent-space normal maps. It uses two grayscale BC4-style channels to store the X and Y information, with Z being reconstructed later in the pixel shader itself. The upside of this is increased fidelity in the normal map (compared to BC1 or BC3, where you don't have the full range of 256 colours); the downsides are the extra memory required to load the texture, since two BC4-style channels give an effective 4 + 4 = 8 bits per pixel, and the additional processing required in the pixel shader.

512 x 512 x 8 = 2,097,152 bits
2,097,152 / 8 = 262,144 bytes or 256kB VRAM


'Signed' or 'unsigned' defines whether the stored values map to a -1 to 1 or a 0 to 1 range when the pixel shader reconstructs the normal


BC6 and BC7
These two newer formats are only supported by D3D11-level hardware (DirectX 11 and above), and only became an option for us Game Guru-ers after the DX11 update in 2018. These exotic and complicated formats use sophisticated techniques to pick the best compression method for each 4x4 block instead of applying one to the entire image. The specifics are a little too complex to go into here (plus I'm not sure I fully understand them myself!) but I will link to a fantastic blog post at the end of this guide where you can do further reading if interested (it's where I learned most of this!).


BC7 compression is designed to take advantage of DirectX11 hardware


BC6 and BC7 do better with our stress test than BC1 and BC3, but it's important to remember we're still dealing with a lossy format, so inevitably the results won't be perfect. In most real-world scenarios, BC7 will produce the best results, almost indistinguishable from a png original. BC6 is designed for storing floating point data as used in HDR images. Since Game Guru doesn't support HDR textures, you can probably ignore this format.


Stress-test comparison


The main downside to BC6 and BC7 is the increased encoding time required to save these textures in the first place. Paint.net offers different compression speeds, with slower being more accurate but taking longer. The resulting textures use an effective 8 bits per pixel, making them twice as costly as BC1 in terms of both filesize and VRAM usage.

512 x 512 x 8 = 2,097,152 bits
2,097,152 / 8 = 262,144 bytes or 256kB VRAM

Despite the increase in quality, surprisingly very few AAA games use more than a handful of BC7 textures. This may be partly down to their increased filesize and memory footprint, but it also may have something to do with compatibility on older consoles that don't support the format.

B8G8R8
You'll notice that none of the block compression methods so far give a 100% accurate rendition of the png stress-test image. Both the Nvidia Photoshop plugin and Paint.net do feature various other uncompressed dds formats, which are less commonly used because of their large filesize. If you really want 100% accuracy, your best bet is B8G8R8, which uses full 8-bit colour channels to save an uncompressed 24-bit colour image.


B8G8R8: I don't know why the channels are swapped around and not R8G8B8


512 x 512 x 24 = 6,291,456 bits
6,291,456 / 8 = 786,432 bytes or 768kB VRAM

Whilst it smashes the competition in terms of accuracy, the lack of compression triples the filesize compared to a BC3 or BC7 image. Over the course of an entire game, this will add up. I don't recommend using this method.
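To put the formats covered above side by side, here is a quick sketch of the VRAM cost of a single 512 x 512 texture in each (effective bits per pixel, no mipmaps):

# Effective storage cost per pixel of the dds formats discussed above
BITS_PER_PIXEL = {
    "BC1": 4,      # 64 bits per 4x4 block
    "BC4": 4,      # single channel, 64 bits per block
    "BC3": 8,      # BC1 colour block plus a BC4-style alpha block
    "BC5": 8,      # two BC4-style channels
    "BC7": 8,      # 128 bits per 4x4 block
    "B8G8R8": 24,  # uncompressed (as counted above; drivers may pad this to 32 bits)
}

side = 512
for fmt, bpp in BITS_PER_PIXEL.items():
    print(f"{fmt}: {side * side * bpp / 8 / 1024:.0f}kB")

# BC1: 128kB, BC4: 128kB, BC3: 256kB, BC5: 256kB, BC7: 256kB, B8G8R8: 768kB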

But seriously though, which format do I pick?
Now that you're aware of the different compression formats available, the question is which one do you use? This is really dependent on the texture itself but, in general, I would stick to the following rules:

PBR:
_color (no complex alpha needed, and not much greys/muted colours) = BC1
_color (complex alpha needed, or lots of greys/muted colours) = BC7
_normal = BC1
_gloss = BC4
_metalness = BC4
_ao = BC4
_emissive = BC1
_illumination = BC1
_surface = BC1 (Game Guru Max only)

DNS:
_D (no complex alpha needed, and not much greys/muted colours) = BC1
_D (complex alpha needed, or lots of greys/muted colours) = BC3
_N = BC1
_S = BC4
_I = BC1

It's always a balancing exercise between texture quality and GPU memory requirements, and this has to be decided on a case-by-case basis. But this is also why you need to optimise both resolution and compression, as it buys you more VRAM to play with for your more important assets. Saving a few megabytes on background props might allow you to spend a few more megabytes on textures closer to the player.
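If it helps, the rules of thumb above can be boiled down to a small helper along these lines (a sketch only; the map suffixes follow the lists above, and 'complex alpha or lots of greys' is still a judgement you make per texture):

def recommended_format(map_suffix, workflow="PBR", complex_alpha_or_grey_heavy=False):
    # Single-channel grayscale data compresses best as BC4
    if map_suffix in {"_gloss", "_metalness", "_ao", "_S"}:
        return "BC4"
    # Colour/diffuse maps: BC1 unless they need a proper alpha or are grey-heavy
    if map_suffix in {"_color", "_D"}:
        if complex_alpha_or_grey_heavy:
            return "BC7" if workflow == "PBR" else "BC3"
        return "BC1"
    # _normal, _emissive, _illumination, _surface, _N, _I
    return "BC1"

print(recommended_format("_color", complex_alpha_or_grey_heavy=True))  # BC7
print(recommended_format("_gloss"))                                    # BC4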

Hopefully this in-depth look at texture optimisation will help you claw back plenty of performance and enable your games to run more smoothly on a wider variety of hardware.

Further reading (and sources):
Understanding BCn Texture Compression Formats
http://www.reedbeta.com/blog/understanding-bcn-texture-compression-formats/

Torque 3D texture compression
http://docs.garagegames.com/torque-3d/official/content/documentation/Artist%20Guide/Formats/TextureCompression.html

DDS Files and DXT Compression
https://w3dhub.com/forum/topic/417101-dds-files-and-dxt-compression/

Steam hardware survey – December 2020
https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
granada
Forum Support
Joined: 27th Aug 2002
Location: United Kingdom
Posted: 6th Feb 2021 11:45
Great information, well explained and laid out. Thank you for taking the time to do this

Dave
Windows 10 Pro 64 bit
GeForce RTX™ 2070 GAMING OC 8G
AMD FX (tm)-9590 Eight-core Processor
31.96 GB RAM
3840 x 2160 ,60 Hz
Super Clark
GameGuru TGC Backer
Joined: 4th Apr 2011
Location: UK
Posted: 6th Feb 2021 12:05
Very informative and in-depth, a lot to look into now. Thanks for the effort to do this tut Avenging Eagle...
mikeven
Joined: 31st Dec 2011
Location:
Posted: 6th Feb 2021 12:41
Thank you Avenging Eagle for your superb tutorial.

If my question is not off-topic, I would like to know your opinion about the textures that map the terrain in Game Guru Max. Is it a good choice for the performance of the engine?



Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 6th Feb 2021 17:25 Edited at: 6th Feb 2021 17:26
Hi mikeven, I'm afraid I haven't bought into Game Guru Max yet so I haven't seen how the terrain textures work. If it's the same as Classic, the _D map uses an alpha channel to control roughness. This does seem like an efficient method. Other assets in Max use the _surface texture, where the red channel = ambient occlusion, green = roughness, and blue = metalness. Since terrain almost never needs an AO or a metalness map, a _surface texture would be wasted on terrain, so using the roughness map in the alpha channel of _D seems like an efficient use of resources. DDS BC3 would be the most efficient format to save a terrain texture in, though BC7 would offer the best visual quality.

AE
mikeven
Joined: 31st Dec 2011
Location:
Posted: 6th Feb 2021 18:32
Hi Avenging Eagle.
Thank you for taking the time to answer my question so quickly.

The creation of digital artworks requires more and more technical knowledge.
This is demonstrated by Dghelneshi in her tutorial about DDS files and DXT Compression (the link provided in your tutorial: https://w3dhub.com/forum/topic/417101-dds-files-and-dxt-compression/ ).









Earthling45
Joined: 3rd Sep 2016
Location: Zuid Holland Nederland
Posted: 6th Feb 2021 21:50
Quote: "Myth #3: My game is slow because there's too many polygons onscreen.
Unlikely, as 3D meshes are infinitely cheaper than textures. A textured 3D model (static, not animated) costs 48 bytes per polygon. A single 3MB 1024 x 1024 texture takes up as much VRAM as a 65K polygon model, and most models in Game Guru are under 10,000 polygons anyway."


Thanks for this thorough tutorial AE.

I've quoted a part from your tutorial because I do have a question about that part.
It would seem that if polygons are indeed cheaper, actual displacement (a denser mesh for added geometry) would be faster than parallax occlusion, which means multiple passes over a texture.
In terms of memory, I think it certainly is cheaper to have more polygons, but rendering them seems to me to be an extra cost in terms of performance, and more so than those extra passes.

At least this is how I have picked it up, though I might be wrong.
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 7th Feb 2021 10:36
You're probably right Earthling45, otherwise why would games bother with displacement mapping? Unfortunately, I have no data on the relative performance hit of displacement mapping versus doing it all "for real" using geometry.

AE
Earthling45
Joined: 3rd Sep 2016
Location: Zuid Holland Nederland
Posted: 7th Feb 2021 19:37 Edited at: 7th Feb 2021 19:37
Maybe it's due to load times, because a low poly model is going to load and render much faster, and with all the detail coming from textures we get LOD through mipmaps, I think.

Sadly mipmaps in GG are only 5 steps, I believe, due to those lines becoming visible on terrain and models with tiled textures.

This video gives a good comparison, I think, of several techniques.

AmenMoses
GameGuru Master
Joined: 20th Feb 2016
Location: Portsmouth, England
Posted: 7th Feb 2021 21:28
There is a certain amount of processing "per poly" that cannot be avoided and is relatively constant regardless of the size of the polygon.
(things like mapping that particular polygon to a part of a texture map) (tF)

There is also an additional amount of processing which varies on the size of that polygon (how much of a texture map it covers for example). (tA)

So if you have 100 polygons you get 100 * tF + 100 * tA

If you now have another bit of extra processing to add 'fake' depth (based on normal maps for example) it will add a little bit to tA but not make tF any longer.

So it all comes down to that fixed overhead per poly: if that is substantial then adding more polys may make it more expensive overall than doing the 'fake' depth processing (and it's not really an 'if', because we know that is the case).
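A loose sketch of that trade-off, with made-up numbers purely to show the shape of the argument (the area-dependent work is treated as roughly constant for the same on-screen coverage, however many polys it is split across):

def render_cost(polys, t_fixed, total_area_work, fake_depth_extra=0.0):
    # Per-poly fixed work, plus screen-area work (optionally inflated by the
    # extra per-pixel processing of a parallax / 'fake depth' shader)
    return polys * t_fixed + total_area_work * (1.0 + fake_depth_extra)

low_poly_with_fake_depth = render_cost(1_000, t_fixed=1.0, total_area_work=2_000, fake_depth_extra=0.5)
real_geometry = render_cost(20_000, t_fixed=1.0, total_area_work=2_000)
print(low_poly_with_fake_depth, real_geometry)   # prints 4000.0 22000.0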
Been there, done that, got all the T-Shirts!
3com
Joined: 18th May 2014
Location: Catalonia
Posted: 8th Feb 2021 02:07 Edited at: 8th Feb 2021 03:27
Very useful info here, detailed and well explained. Avenging Eagle thanks for the tuto.

There might be another factor to take into account in these equations as well: texel density, which is well explained in the two links below.

here and here.

There are also some texel density tools, such as Texel Density Advisor for Maya.

texel density tool for 3ds max

Understanding Texel Density

Just wanted to add something that might be slightly off-topic, but it's worth knowing, imho.

Edit: fixing links.
Laptop: Lenovo - Intel(R) Celeron(R) CPU 1005M @ 1.90GHz

OS: Windows 10 (64) - Ram: 4 gb - Hd: 283 gb - Video card: Intel(R) HD Graphics
cpu mark: 10396.6
2d graphics mark: 947.9
3d graphics mark: 8310.9
memory mark 2584.8
Disk mark: 1146.3
Passmark rating: 3662.4

Ertlov
GameGuru BOTB Developer
Joined: 18th Jan 2007
Location: Australia
Posted: 8th Feb 2021 08:32
Any chance for a tutorial part on multi materials per object?
"I am a road map, I will lead and you will follow, I will teach and you will learn, when you leave my sprint planning you will be weapons, focused and full of JIRA tickets, Hot Rod rocket development gods of precision and strength, terrorizing across the repository and hunting for github submits."
Bored of the Rings
GameGuru Master
Joined: 25th Feb 2005
Location: Middle Earth
Posted: 8th Feb 2021 11:15
@AE-thanks for going out of your way to produce these tuts. Will have a good read later.
Professional Programmer: Languages- SAS (Statistical Analysis Software) , C++ VS2019, SQL, PL-SQL, JavaScript, HTML, Three.js, others
Hardware: ULTRA FAST Quad Core Gaming PC Tower WIFI & 16GB 1TB HDD & Win 10 (x64), Geforce GTX1060(3GB). Dell Mixed Reality VR headset, Aerodrums 3D
Tauren
Joined: 25th Jun 2015
Playing: PUBG,Conan Exiles,WoW,HoMM III,MoO 2,Master of Orion 2016
Posted: 9th Feb 2021 07:56 Edited at: 9th Feb 2021 08:03
Avenging Eagle, thank you so much for this guide!
Written very clearly and understandably, with illustrative examples. I learned more about dds formats; before reading I only used the BC5 method, now I can save some more memory.
Thanks for your articles, explanations and examples. You are the best community I have met.
Belidos
3D Media Maker
Joined: 23rd Nov 2015
Playing: The Game
Posted: 9th Feb 2021 08:26
For the _surface texture, you suggest BC1, however you mention earlier that BC1 is good for "no complex alpha needed, and not much greys/muted colours", but a _surface texture is three channels of greyscale data (AO, roughness, and metallic), all of which are entirely greyscale, so surely BC1 is not good for those? Wouldn't BC7 be better?
Primary Desktop:
i7 7700,k NV1070 8GB, 16GB 3200mhz memory, 1x 2TB Hybrid, Win10.

Secondary Desktop:
i5 4760k, NV960 2GB, 16GB 2333mhz memory, 1x 2TB Hybrid, Win10.

Primary Laptop:
i5, NV1050 4GB, 8GB memory, 1x 1TB HDD, Win10.

Secondary Laptop:
i3, Intel 4000 series graphics, 6GB memory, 1x 500gb HDD, Win8.1.


Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 9th Feb 2021 13:56 Edited at: 9th Feb 2021 14:34
@Belidos

BC1 is not very good at replicating grey colours because greys require an equal split of the red, green, and blue channels. Since the red and blue channels have half as many possible shades available to them as the green channel, you end up with green or purple tinted greys (an RGB value like 45,43,45 is not uncommon).

Now consider the _surface texture. Each channel is used independently by the shader. The shader only uses the red channel for AO, only the green channel for roughness, and only the blue channel for metalness. It's not necessary to combine the values of the red, green and blue channels with a surface texture. So then the question becomes, why that order? Why not red for roughness, or green for metalness? You're an artist, so you'll probably have experienced first-hand that when making a metalness map, it's often either black or white, and an AO map is usually mostly white with a little bit of grey. You can quite convincingly compress both of these from 256 shades per channel to the 32 shades per channel of a BC1. Roughness/gloss, on the other hand, has more nuance, with lots of areas of grey in between black and white; BC1 gives the green channel 64 possible values. OK, it's not perfect, certainly not as accurate as the full 256 shades, but it's good enough.

BC7 also costs twice as many bits per pixel as BC1, and it supports an alpha channel that surface textures currently don't make use of, so for me that's wasted overhead.
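For anyone assembling their own _surface maps, the channel packing itself is trivial. A sketch with Pillow, using hypothetical filenames:

from PIL import Image

# Pack three greyscale maps into one _surface texture:
# red = ambient occlusion, green = roughness, blue = metalness
ao = Image.open("crate_ao.png").convert("L")
roughness = Image.open("crate_roughness.png").convert("L")
metalness = Image.open("crate_metalness.png").convert("L")

surface = Image.merge("RGB", (ao, roughness, metalness))
surface.save("crate_surface.png")   # then convert to dds in your DDS tool of choice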

AE
Belidos
3D Media Maker
Joined: 23rd Nov 2015
Playing: The Game
Posted: 9th Feb 2021 14:22
Fair enough, just thought I would check.

AmenMoses
GameGuru Master
Joined: 20th Feb 2016
Location: Portsmouth, England
Posted: 9th Feb 2021 22:55
Reading the links provided, it is obvious that some conversion utilities do a bad job compared to others. Any idea what GG uses?

I'm wondering if a lot of the complaints people have about their models not looking as good in GG are because they are using .png textures and GG is doing a poor job of converting them to the appropriate DDS equivalent.
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 9th Feb 2021 23:02
I assumed the conversion was done by the graphics card rather than by the engine, but I guess it could all be handled in-engine. The exception is lightmaps: they are created by the engine and are saved in an extremely compressed dds format. You can often see green and purple banding in the falloff of lights, which points to the lightmaps being BC1 compressed, but whatever is creating them is doing a terrible job of compressing them. It would be nice if they could be saved in BC7, but that might be a bigger resource overhead.

AE
OldFlak
GameGuru TGC Backer
Joined: 27th Jan 2015
Location: Tasmania Australia
Posted: 17th Feb 2021 22:14
@Avenging Eagle - great read.

So for a rounded out tutorial, it would be interesting to add the pros and cons of _illumination maps.

OldFlak....
System Specs
i7-9700K 3.60GHz. ASUS NVidia GeForce GTX 1060 6GB. 32GB Themaltake ToughRam Z-ONE 3600.
Main Screen: HP 27" @1920x1080 - Screens 2\3: Acer 24" @ 1920 x 1080

Windows 10 Pro 64-bit Insider
aka Reliquia
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 19th Feb 2021 13:52
OldFlak wrote: "So for a rounded out tutorial, it would be interesting to add the pros and cons of _illumination maps."


Sorry OldFlak, what do you mean? Surely you either want to have self-illuminating parts of your asset, or you don't. AFAIK, _emissive and _illumination do the same thing so no need for comparison there. I would recommend BC1 for _emissive and _illumination maps because of the reduced bit-depth, although some users may want to stretch to BC7 if they want more accurate colour rendition; it just means spending twice as many bits per pixel.

AE
OldFlak
GameGuru TGC Backer
Joined: 27th Jan 2015
Location: Tasmania Australia
Posted: 19th Feb 2021 22:23 Edited at: 20th Feb 2021 04:05
AE Thanks for reply

So, for example, what are the trade-offs if you have, say, a 2048 texture with just one small part requiring illumination, as opposed to one with lots of illuminated parts?
Is the rendering hit for a texture with very little detail less than for one with lots of detail, or is it always the same hit no matter what is on the map?

OldFlak....
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 20th Feb 2021 10:06
Ah I see. Well, the contents of the texture itself are irrelevant from a memory perspective; the full texture needs to be loaded into memory, and the number of bits per pixel is the same regardless of whether it's a black pixel or a coloured pixel. Even compressed to BC1, your 2048 x 2048 _emissive or _illumination map will require 2MB of texture memory on top of all the other textures that asset has.

AE
OldFlak
GameGuru TGC Backer
Joined: 27th Jan 2015
Location: Tasmania Australia
Posted: 20th Feb 2021 10:42 Edited at: 20th Feb 2021 10:44
AE - thanks for that.
I thought that would be the case.
I have been keeping my larger textures free of illumination in lieu of putting multiple illuminated entities on the same texture, as long as they are meant to be used together on the same map.

OldFlak....
3dg3
Joined: 30th Aug 2015
Playing:
Posted: 22nd Mar 2021 21:19
Great guide! Just thought I would share some info in case someone has problems with BC4:
BC4 exported from Gimp does not show up in GG at all (with my Nvidia card at least). Importing the Gimp BC4 file into Paint.net and exporting it as BC4 fixes the file for GG.
cybernescence
GameGuru Master
Joined: 28th Jan 2013
Playing: Cogwheel Chronicles
Posted: 22nd Apr 2021 21:21
This is a great info-tutorial, thanks.

In terms of textures consuming VRAM it's actually worse, as the engine also auto-calculates mipmaps for each texture loaded. These are a series of smaller-resolution textures, based on the primary loaded one, that are displayed when models are at a distance in the 3D scene. So this uses extra GPU memory on top of the base texture requirements.

Cheers.
GPU: GeForce RTX 2070 SUPER PassMark: 18125
Avenging Eagle
Joined: 2nd Oct 2005
Location: UK
Posted: 26th Apr 2021 23:47
I was actually quite surprised to learn Game Guru auto-generates mip maps (I've come across them before). Paint.net gives you the option to create these automatically when saving to DDS, but I don't bother since Game Guru does it automatically. Apparently it adds about a third to the filesize and, presumably, the VRAM consumption.
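That 'about a third' figure drops straight out of the geometry of the mip chain: each level is a quarter the size of the one before, so the extra levels sum to roughly a third of the base texture. A quick sketch:

def vram_with_mips(side, bits_per_pixel=4):   # e.g. a BC1 texture
    total_bits = 0
    while side >= 1:
        total_bits += side * side * bits_per_pixel
        side //= 2
    return total_bits / 8

base_bytes = 1024 * 1024 * 4 / 8
print(vram_with_mips(1024) / base_bytes)   # ~1.333, i.e. about a third more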

Wizard of id's recent post on the Live Broadcast #40 thread casts fresh doubt on some of the maths I've used in this tutorial. It turns out the VRAM consumption may not be as high as first thought, which is actually quite relieving! Regardless, the advice on how to optimise your textures remains valid and, as far as I can tell, best practice.
https://forum.game-guru.com/thread/222699#msg2640752

AE
granada
Forum Support
Joined: 27th Aug 2002
Location: United Kingdom
Posted: 29th Apr 2021 14:56
This compression tool might help people out

https://developer.nvidia.com/nvidia-texture-tools-exporter

Dave
