Asset pipelines are one of the pieces of game development I don’t see people talk about very often. The main concepts are often introduced and chatted about theoretically, sure: “convert your assets into binary packages, compress them, and load them at runtime,” but rarely do I find people chatting about the implementation details. It always strikes me as a little odd, because I’d argue it has the single biggest impact on development. How fast can the game load? How long do you have to wait to see the results of a change you make? How quickly can people add new assets? How quickly can developers add new asset types? All of these have a huge impact on both artists and programmers alike.

Because of this, in this post, I want to walk through how I handle assets in my little hobby engine called Progression. It doesn’t introduce any novel ideas, but I hope by sharing it, people can learn a bit and get more ideas of what to do (and what not to do) for their own asset pipelines. In this part 1 post, I will cover the overall asset pipeline, focusing on how the Converter is structured. In part 2 I will go over more specifics of how individual assets are converted (like how textures are composited and compressed).

High-Level Goals

It’s important to realize what you want most out of your asset pipeline. The entire reason I made one in the first place is that slow iteration times make my brain short-circuit. There’s nothing more frustrating than making a tiny change and then needing to wait several seconds (or often minutes at real studios) to see if it worked. That’s why, for me, the number one goal was to have fast load times. As for the other aspects:

  1. Convert Times: Medium priority for me, since it directly impacts iteration speed but happens less often (for me) than booting the game. If you’re in a studio with many people converting? I’d argue it should be extremely high priority, though it often seems to get pushed to the side.
  2. Runtime Performance and Quality: High priority. How much you do offline directly impacts how fast the engine can process and render things. I usually make the choice to favor high FPS, but if you find yourself annoyed at long convert times, then it can definitely be beneficial to sacrifice a little FPS for faster iteration times, especially in development builds.
  3. Disk Size: Low priority, since I’m just making small demos and not shipping a game. I’m actually really passionate about this for real studios (the Call of Duty download sizes make me want to cry) and I think compression is super cool. But for a hobby engine? It hardly matters until you’re about to ship.
  4. Ease of Adding New Assets and Asset Types: Low priority, since I just do small scenes, and rarely need to add new asset types after the initial set (textures, models, pipelines, scripts, etc).
  5. Useability: Medium-high priority. I wanted a simple and consistent interface to access assets, both in the engine C++ code, and in game scripts.

Before Asset Conversion

So how does this all work in my engine? Well, there are 3 main executables:

  • Engine.exe: the game
  • Converter.exe: runs before the game. Responsible for processing all of the assets that a scene will need when loaded in the Engine. It loads them, converts them all to binary, and groups them into binary packages. I call those packages “Fastfiles”, because that was what they were called in CoD, and I got used to it.
  • ModelExporter.exe: responsible for taking source model files (.obj, .fbx, .gltf, etc) and converting them into a common model file format (.pmodel) that the Converter can then use. This could have all been part of the Converter, but parsing model files can be really involved and sometimes the output can need some cleanup before being used in your real pipeline. So, it was helpful to have as a separate executable that runs once on every model you download, and then never again.

We’ll walk through from the beginning of the pipeline when the Converter starts up, to loading a fastfile in the Engine. The ModelExporter won’t be discussed further here.

Asset Files

In order to use an asset in Progression, it first has to be described in a Progression Asset File (.paf). These are just JSON text files which contain all of the info needed to load an asset. I don’t really recommend JSON in hindsight, but it is convenient to use something like JSON or another markup language that already has fast parsers written for it. Here is an example of a couple assets:

[{ "Image": {
    "name": "kloppenheim",
    "equirectangularFilename": "images/skies/kloppenheim_06_2k.exr",
    "semantic": "ENVIRONMENT_MAP"
}},
{ "Pipeline": {
    "name": "frustum_cull_meshes",
    "computeShader": "frustum_cull_meshes.comp"
}}]

You can see that each asset first defines the AssetType (Image and Pipeline above). Every asset also needs to have a name. In my engine, the name is the GUID. Different asset types can share the same name, but two assets of the same type cannot. I know some people use hashes for GUIDs, but I find using plain text names extremely convenient for readability, searchability, and debugging. I highly recommend them. Beyond the name, the parameters are specific to the asset type and are used to define how to load the asset. These JSON definitions are directly parsed into [AssetType]CreateInfo structures which look like this:

struct BaseAssetCreateInfo
{
    string name;
};

struct ModelCreateInfo : public BaseAssetCreateInfo
{
    string filename; // relative path to pmodel file
    bool recalculateNormals = false;
};

...

So the first thing the Converter does is parse every single .paf file into a bunch of [AssetType]CreateInfo structs. This defines the full list of assets that a scene can reference, along with all the info needed to load each one.

Parsing .paf Files

How exactly you go from JSON -> CreateInfo isn’t super important, so I’ll just cover the main points here. The full code for this can be seen in asset_parser.hpp, asset_parser.cpp, and asset_file_database.cpp.

Just like how each asset type defined an [AssetType]CreateInfo struct that derived from BaseAssetCreateInfo, each type also defines an [AssetType]Parser that derives from BaseAssetParser. This is just so we can hold the parsers in a single array BaseAssetParser* g_assetParsers[ASSET_TYPE_COUNT], and then call a virtual Parse function that parses the JSON and returns a filled out [AssetType]CreateInfo. The caveat is that there is one extra level of inheritance with a templated class, to actually allocate the specific CreateInfo type.
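To make that pattern concrete, here’s a minimal sketch of the extra templated level. This is simplified and the names are assumed; in particular, a plain string map stands in for the rapidjson::Value the real code parses.

```cpp
#include <map>
#include <memory>
#include <string>

struct BaseAssetCreateInfo { std::string name; };
using BaseInfoPtr      = std::shared_ptr<BaseAssetCreateInfo>;
using ConstBaseInfoPtr = const std::shared_ptr<const BaseAssetCreateInfo>;
using Json             = std::map<std::string, std::string>; // stand-in for rapidjson::Value

class BaseAssetParser
{
public:
    virtual ~BaseAssetParser() = default;
    virtual BaseInfoPtr Parse( const Json& value, ConstBaseInfoPtr parent ) = 0;
};

// The templated middle layer: allocates the *specific* CreateInfo type,
// handles parent copying, then lets the derived parser fill in the rest.
template <typename DerivedInfo>
class BaseAssetParserTemplate : public BaseAssetParser
{
public:
    BaseInfoPtr Parse( const Json& value, ConstBaseInfoPtr parent ) override
    {
        auto info = std::make_shared<DerivedInfo>();
        if ( parent )
            *info = *std::static_pointer_cast<const DerivedInfo>( parent );
        info->name = value.at( "name" );
        return ParseInternal( value, info ) ? info : nullptr;
    }

protected:
    virtual bool ParseInternal( const Json& value, std::shared_ptr<DerivedInfo>& info ) = 0;
};

// A hypothetical concrete parser, just to show the shape:
struct ScriptCreateInfo : BaseAssetCreateInfo { std::string filename; };

class ScriptParser : public BaseAssetParserTemplate<ScriptCreateInfo>
{
protected:
    bool ParseInternal( const Json& value, std::shared_ptr<ScriptCreateInfo>& info ) override
    {
        auto it = value.find( "filename" );
        if ( it != value.end() )
            info->filename = it->second;
        return true;
    }
};
```

The virtual Parse lives in the templated class, so each concrete parser only has to implement a strongly-typed ParseInternal.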

One interesting aspect to consider here is inheritance (or parenting) in the JSON itself. Say you spent a lot of time filling out a complicated asset with many parameters. Later on it turns out you need to make 3 more just like the first one, with only 1-2 parameters changed. You could copy and paste the original asset definition 3 times, or you could declare that the 3 new assets inherit from that original asset. That way every parameter is copied except the name, and you only have to specify new parameters instead of all of them. In Progression this is done by specifying a “parent” parameter in the JSON. Then in the code, it is used like this:

std::shared_ptr<BaseAssetCreateInfo> parentCreateInfo = nullptr;
if ( value.HasMember( "parent" ) )
    parentCreateInfo = FindAssetInfo( assetType, value["parent"].GetString() );

std::shared_ptr<BaseAssetCreateInfo> info;
info = g_assetParsers[assetType]->Parse( value, parentCreateInfo );
...
...
virtual BaseInfoPtr Parse( const rapidjson::Value& value, ConstBaseInfoPtr parentCreateInfo ) override
{
    // create the specific [AssetType]CreateInfo, not a BaseAssetCreateInfo
    auto info = std::make_shared<DerivedInfo>();
    
    // if this asset has a parent, copy all of the parameters except the name
    if ( parentCreateInfo )
        *info = *std::static_pointer_cast<const DerivedInfo>( parentCreateInfo );
    const std::string assetName = value["name"].GetString();
    info->name                  = assetName;

    // finally, fill out the createInfo data by parsing the JSON
    return ParseInternal( value, info ) ? info : nullptr;
}

The caveat here is that I only do a single pass over the asset files. So, if an asset wants to use a parent, that parent must be defined earlier in the same asset file. I personally don’t mind this restriction, and I even think it helps keep things more contained. I definitely remember artists complaining about this at SHG though, which also used single-pass asset parsing.

Scanning The Scene For Used Assets

Now that we have the possible assets parsed and ready, the next step is to actually figure out which of those are needed for the scene. In my pipeline, the Converter is run per-scene like so: Converter.exe [sceneName]. My scene files are also JSON:

[{ "Camera": { "position": [ -15, -25, 0 ], "rotation": [ 0, 0, 0 ], "nearPlane": 0.02 }},
{ "Skybox": "kloppenheim" },
{ "Script": "cameraController" },
{ "DirectionalLight": { "color": [ 1, 1, 1 ], "direction": [ 0, 0, -1 ] } },
{ "Entity": {
    "NameComponent": "dragon",
    "Transform": { "position": [ 3, 0, 0 ], "rotation": [ 90, 0, 90 ], "scale": [ 1, 1, 1 ] },
    "ModelRenderer": { "model": "dragon", "material": "blue" }
}}]

So in the example scene above, the Converter would need to convert: the image ‘kloppenheim’, the script ‘cameraController’, the model ‘dragon’, and the material ‘blue’.

Referenced Assets

Assets can also implicitly reference other ones, so once the scene is parsed we call AddReferencedAssets on each of these assets. For example:

void GfxImageConverter::AddReferencedAssetsInternal( ConstDerivedInfoPtr& imageInfo )
{
    if ( imageInfo->semantic == GfxImageSemantic::ENVIRONMENT_MAP )
    {
        auto irradianceInfo      = std::make_shared<GfxImageCreateInfo>( *imageInfo );
        irradianceInfo->name     = imageInfo->name + "_irradiance";
        irradianceInfo->semantic = GfxImageSemantic::ENVIRONMENT_MAP_IRRADIANCE;
        AddUsedAsset( ASSET_TYPE_GFX_IMAGE, irradianceInfo );

        auto reflectionProbeInfo      = std::make_shared<GfxImageCreateInfo>( *imageInfo );
        reflectionProbeInfo->name     = imageInfo->name + "_reflectionProbe";
        reflectionProbeInfo->semantic = GfxImageSemantic::ENVIRONMENT_MAP_REFLECTION_PROBE;
        AddUsedAsset( ASSET_TYPE_GFX_IMAGE, reflectionProbeInfo );
    }
}

You can see that for images that are environment maps, we also generate two additional images: the irradiance map and the reflection probe. These are later used by the renderer for image-based lighting (IBL). This mechanism can be used by any asset type; Pipeline assets, for example, add the individual shaders that they reference.

Non-Inferable Assets

While parsing the scene this way gets most of the assets you need, what about assets that your Lua scripts might try to load? The only way to handle these is to explicitly make a list of assets that might be used by the script. For Progression, scene files are stored as [sceneName].json. But the Converter also checks to see if there exists a corresponding [sceneName].csv file in the same directory when processing a scene. If it does, then it loads this file in addition to the .json one. These files are simply lists of assets, in the form [AssetType],[AssetName] on each line.
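For example, a hypothetical scene whose script spawns a few extra things at runtime might ship with a [sceneName].csv like this (the asset names here are made up for illustration):

```
Script,spawnEnemies
Model,enemy_drone
Material,enemy_drone_red
```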

One other area of non-inferable assets is scene-agnostic assets that are not tied to any game objects. One example of this would be compute shaders. These are also handled by adding them to .csv files like assets in scripts, but since these .csvs are agnostic to real scenes, they are stored in a special directory: assets/scenes/required/. When the Converter runs it processes every file in this directory, to always make sure the required assets are up to date and available. The Engine then usually manually loads these fastfiles at startup, like how the renderer loads the gfx_required fastfile to get all of the shaders it needs.

Asset Conversion

Now that we have the list of assets the scene uses, and all of their CreateInfo’s, it’s time to actually convert them. If you want to see the full code, look at ConvertAssets() in converter_main.cpp, all of base_asset_converter.hpp, and at each asset type’s converter. Just like the BaseAssetCreateInfo and BaseAssetParser pattern, there is also a BaseAssetConverter class:

using ConstBaseCreateInfoPtr = const std::shared_ptr<const BaseAssetCreateInfo>;

class BaseAssetConverter
{
public:
    const AssetType assetType;

    BaseAssetConverter( AssetType inAssetType ) : assetType( inAssetType ) {}
    virtual ~BaseAssetConverter() = default;

    virtual string GetCacheName( ConstBaseCreateInfoPtr& baseInfo ) { return ""; }
    virtual AssetStatus IsAssetOutOfDate( ConstBaseCreateInfoPtr& baseInfo ) { return AssetStatus::UP_TO_DATE; }
    virtual bool Convert( ConstBaseCreateInfoPtr& baseInfo ) { return true; }
    virtual void AddReferencedAssets( ConstBaseCreateInfoPtr& baseInfo ) {}
};

And just like the BaseAssetParser, there is also one extra intermediate base class, to handle the type casting to the specific [AssetType]CreateInfo and any type-agnostic conversion code.

template <typename DerivedAsset, typename DerivedInfo>
class BaseAssetConverterTemplate : public BaseAssetConverter
{
    ...

What Exactly Is A Converted Asset

We haven’t actually covered what it means for an asset to be converted yet. In Progression, a converted asset is one that has been loaded, serialized to binary, and saved to a file. Specifically, these files all get saved under the asset cache directory, which is located at [projectDir]/assets/cache/. The filenames all take the pattern [assetName]_[createInfoHash]_[versionNumber].ffi, where .ffi stands for “fastfile intermediate”. For example, my assets/cache/models/ directory currently looks like this:

cube_5919176923328749623_v6.ffi
dragon_17433001533983433154_v6.ffi
sponza_vulkansamples_15064524848871577573_v6.ffi

I refer to the [assetName]_[createInfoHash] component as the ‘cache name’, and this is what the BaseAssetConverter::GetCacheName function returns. You don’t have to use this naming convention exactly, but I am relying on the fact that any changes to the asset’s CreateInfo data will change the cache name. A couple of naming alternatives might be:

  1. Fold the asset’s name into the hash along with everything else. This would give shorter and more consistent filenames, but I find having the asset name prefix makes for easier debugging when anything goes wrong in the Converter.
  2. Don’t hash everything, but rather just convert (some or all of) the CreateInfo data directly to a string. For example, if you had an asset that just had a dozen bools, you could just append a 1 or 0 to the cache name for each of the bools: assetName_100011110001_v0.ffi. This is nice because you can fully identify the entire CreateInfo just by looking at the cache name, which makes for more powerful debugging. However, I find most real asset CreateInfos have either a lot of parameters or long string parameters like filenames. As a result, using this style would make the cache name super long and unfeasible.
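Here’s a rough sketch of how a cache name could be built; the struct, field, and function names are assumed (the real GetCacheName lives in each asset type’s converter). The key property is that every CreateInfo field feeds the hash, so any settings change produces a different name.

```cpp
#include <cstdint>
#include <functional>
#include <string>

struct ModelCreateInfo
{
    std::string name;
    std::string filename;
    bool recalculateNormals = false;
};

// Standard hash-combine mixing step
inline void HashCombine( size_t& seed, size_t v )
{
    seed ^= v + 0x9e3779b97f4a7c15ull + ( seed << 6 ) + ( seed >> 2 );
}

// Produces "[assetName]_[createInfoHash]_v[versionNumber]", matching the
// cache naming pattern described above.
std::string GetModelCacheName( const ModelCreateInfo& info, uint32_t version )
{
    size_t h = std::hash<std::string>{}( info.filename );
    HashCombine( h, std::hash<bool>{}( info.recalculateNormals ) );
    return info.name + "_" + std::to_string( h ) + "_v" + std::to_string( version );
}
```

Flipping any parameter (here, recalculateNormals) yields a different cache name, which is exactly what makes the existence check in the next section work.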

Is The Asset Out-Of-Date

We only want to actually convert an asset if it’s out-of-date. This is the job of the BaseAssetConverter::IsAssetOutOfDate function. There are two components to this:

  1. Has the asset been converted before?
    This one is assetType-agnostic and easy: just get the cache name for the asset and check to see if that cache file (.ffi) exists! This is why I said I am “relying on the fact that any changes to the asset’s CreateInfo data will change the cache name.” It gives us a quick way to see if we’ve ever converted a given combination of settings for an asset before, because if we haven’t, then no matching file will be found.
  2. If it has been converted before, did any of the asset’s source data change?
    This one depends on the asset type. For example, if we are converting a model asset whose source file is ‘dragon.obj’, then we need to check the timestamp on that .obj file. If the timestamp is newer than our cache file’s timestamp, then we need to reconvert. For images, you would have to check the source .png file(s) (possibly multiple, for cubemaps) instead.

Asset Versioning

Assets can change over time. How many parameters they have, their values, how they’re serialized, etc. When this happens, it naturally changes what the converted asset’s binary would be as well. This creates a potential problem: if we update how a model asset is converted, then we need to mark every single model as out-of-date, regardless of what the timestamps are. The way this is done is through asset version numbers. Each asset type has a version number, and when you change how an asset is converted, you bump the version number for that asset type. Since these version numbers are included in the cache filename, bumping the version number always causes those assets to be considered out-of-date.

Finally Converting The Asset

Just like all the other classes before this, I use a BaseAsset virtual class, that all the real asset types inherit from:

class BaseAsset
{
public:
    BaseAsset() = default;
    virtual ~BaseAsset();

    virtual bool Load( const BaseAssetCreateInfo* baseInfo ) { return false; }
    virtual bool FastfileLoad( Serializer* serializer )       = 0;
    virtual bool FastfileSave( Serializer* serializer ) const = 0;
    virtual void Free() {}
    
    ...
};

A brief explanation of these functions:

  • Load: takes in the CreateInfo and is expected to load the asset from source. Used by the Converter, but compiled out of the Engine, since the Engine should always be loading converted assets, not source assets.
  • FastfileLoad: loads a binary converted asset. Used by the Engine, not the Converter.
  • FastfileSave: serializes a converted asset. Used by the Converter, not the Engine.
  • Free: Engine only, mainly used for freeing up gpu resources the asset might have.

So, in the Converter, all we really need to do with out-of-date assets is call Load with the appropriate CreateInfo, and then FastfileSave:

virtual bool ConvertInternal( ConstDerivedInfoPtr& derivedCreateInfo )
{
    DerivedAsset asset;
    const std::string cacheName = GetCacheName( derivedCreateInfo );
    asset.cacheName             = cacheName;
    if ( !asset.Load( derivedCreateInfo.get() ) )
    {
        LOG_ERR( "Failed to convert asset %s %s", g_assetNames[assetType],
            derivedCreateInfo->name.c_str() );
        return false;
    }

    if ( !AssetCache::CacheAsset( assetType, cacheName, &asset ) )
    {
        LOG_ERR( "Failed to cache asset %s %s (%s)", g_assetNames[assetType],
            derivedCreateInfo->name.c_str(), asset.cacheName.c_str() );
        return false;
    }

    return true;
}

The AssetCache::CacheAsset simply opens the appropriate .ffi file, and then calls FastfileSave to serialize the asset’s data into the file.

Common Pitfall

Sometimes, when you’re adding a new asset, you mess up and have some bugs in either Load or FastfileSave. If the bug causes the Converter to crash while saving the .ffi file, this causes an issue. When you fix the bugs and go to run the Converter again, it will see that the .ffi exists, with a brand new timestamp, and think the asset is up to date, even though it was only a partially written file before the crash! The only way to fix the issue at that point would be to either force convert the asset that failed, or delete the invalid .ffi file.

The way I try to mitigate this is, first, by not opening the .ffi until Load has fully finished. Second, I wrap the call to FastfileSave in a try/catch block, and if there is an exception I delete the file in the catch block. This seems to work reasonably well, though I think a better way would be to write to a temporary .ffi file, and then, if everything succeeds, rename that file to the intended cache name. That way, if your computer blue screens or you lose power while an asset is being serialized, you could just run the Converter again like normal.
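A sketch of that write-to-temp-then-rename idea (the function name and .tmp extension are made up): a crash mid-write leaves behind only a .tmp file, never a valid-looking .ffi.

```cpp
#include <filesystem>
#include <fstream>
#include <string>
namespace fs = std::filesystem;

bool SaveCacheFileAtomically( const fs::path& finalPath, const std::string& bytes )
{
    fs::path tmpPath = finalPath;
    tmpPath += ".tmp";
    {
        std::ofstream out( tmpPath, std::ios::binary );
        if ( !out.write( bytes.data(), (std::streamsize)bytes.size() ) )
            return false;
    } // stream closed (and flushed) before the rename

    // Rename is atomic on most filesystems: readers see either the old
    // file or the complete new one, never a half-written cache entry.
    fs::rename( tmpPath, finalPath );
    return true;
}
```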

Parallelization

With the system described so far, scenes need to be processed one at a time, loaded single-threaded. For my engine that’s really not an issue, since it’s very quick. The actual asset conversion is the important piece to parallelize. Fortunately, we’ve set things up so that each asset can be processed independently; all each one needs is its CreateInfo. So, I just get the list of all the out-of-date assets, and then convert them all in parallel using OpenMP. This works great, though the one caveat is that by default OpenMP doesn’t allow nested parallel regions. So if any of your asset Load functions are parallelized the same way, consider adding omp_set_nested( 1 ); to the start of the Converter to allow nested parallelization. I found this helpful because my environment maps are pretty slow to process, so the Converter would stall waiting for them to finish without nested parallelization.
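The convert loop could look something like this sketch (struct and function names assumed; when OpenMP isn’t enabled, the pragma is ignored and the loop just runs serially):

```cpp
#include <vector>
#ifdef _OPENMP
    #include <omp.h>
#endif

struct OutOfDateAsset
{
    int id; // stands in for the assetType + CreateInfo pair in the real code
};

// Hypothetical per-asset convert; here we pretend negative ids fail.
static bool ConvertAsset( const OutOfDateAsset& asset ) { return asset.id >= 0; }

bool ConvertAllOutOfDate( const std::vector<OutOfDateAsset>& assets )
{
    bool allSucceeded = true;
#ifdef _OPENMP
    omp_set_nested( 1 ); // allow Load() functions that are themselves parallel
#endif
#pragma omp parallel for
    for ( int i = 0; i < (int)assets.size(); ++i )
    {
        if ( !ConvertAsset( assets[i] ) )
            allSucceeded = false; // benign race: the flag only ever flips true -> false
    }
    return allSucceeded;
}
```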

Creating The Fastfiles

At this point in the Converter, every asset has been converted and is up to date, so it’s time to create the fastfile. I haven’t mentioned what the fastfile actually is yet though: it’s simply all of the converted assets bundled into one file. This makes load times faster by having a single file IO call that can be read serially, instead of a ton of small file IO calls reading the .ffi files directly. You can add whatever metadata you want, but currently, my fastfiles are literally just lists of [AssetType][AssetBinary] pairs.

Now the question is: do we need to rebuild the fastfile? Well, it’s very similar to checking for out-of-date assets, with one extra case:

  1. Does the fastfile exist? I store all of mine in assets/cache/fastfiles/ with the naming convention [sceneName]_[version].ff. If this file is not found, the fastfile must be built.
  2. If the fastfile does exist, then we need to compare the fastfile’s timestamp, to every single asset used in the scene. If any of the asset timestamps are newer than the fastfile, it means the fastfile is out-of-date. It’s not enough to just check if ( numberOfOutOfDateAssets > 0 ), because the assets could have been converted from a different scene, but are still newer than the current scene’s fastfile.
  3. Finally, we need to check if the list of assets we would put in the fastfile, if we built it, is different from the list of assets that are already in the previously built fastfile. To track this, every time a fastfile is built I export a text file of [AssetType],[AssetName] pairs for every asset used in that fastfile, and store it in assets/cache/assetlists/[sceneName].txt. The next time the Converter is run, we can compare the current list of assets to the ones in this text file. If the lists differ at all, the fastfile must be rebuilt.
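Step 3 could be sketched like this (names assumed), treating the saved asset list and the current one as order-insensitive sets of "[AssetType],[AssetName]" lines:

```cpp
#include <set>
#include <sstream>
#include <string>

using AssetList = std::set<std::string>; // each entry is "AssetType,AssetName"

// Parse the saved assetlists/[sceneName].txt contents into a set
AssetList ParseAssetList( const std::string& text )
{
    AssetList list;
    std::istringstream stream( text );
    for ( std::string line; std::getline( stream, line ); )
        if ( !line.empty() )
            list.insert( line );
    return list;
}

bool FastfileNeedsRebuildForAssetList( const std::string& savedListText, const AssetList& currentAssets )
{
    return ParseAssetList( savedListText ) != currentAssets; // order-insensitive compare
}
```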

Inter-Asset Dependencies + Asset Ordering

Sometimes assets reference other assets, like a material referencing its albedo and normal textures. So, when we deserialize these assets in the Engine, we either need to make sure the images are loaded before the materials, or do two-pass loading to fix up the references. I chose the first option for simplicity. This means, however, that the order of assets in our fastfile matters. I handle this by grouping the assets by type, and having an explicit type order:

enum AssetType : u8
{
    ASSET_TYPE_GFX_IMAGE = 0,
    ASSET_TYPE_MATERIAL  = 1,
    ASSET_TYPE_SCRIPT    = 2,
    ASSET_TYPE_MODEL     = 3,
    etc...
};

As you can see, all of the images are first. That’s because they don’t reference anything else. Materials need to go after images because they reference images. Scripts don’t reference anything, so they can be ordered anywhere, but models can reference materials, so they have to be after materials.

Versioning

Just like with converted assets, we have to consider what happens when we make changes to the Converter. If we change the version number on any of the assets, we have to rebuild the fastfile. We also need a separate version number just for fastfiles, for when the fastfile serialization or metadata changes independently of the converted assets. In Progression it looks like this:

constexpr i32 g_assetVersions[] = {
    9,  // ASSET_TYPE_GFX_IMAGE, "New name serialization"
    10, // ASSET_TYPE_MATERIAL,  "New name serialization"
    1,  // ASSET_TYPE_SCRIPT,    "New name serialization"
    6,  // ASSET_TYPE_MODEL,     "Add meshlet cull data"
    etc....
};

constexpr u32 PG_FASTFILE_VERSION = 10 + ARRAY_SUM( g_assetVersions ); // reason

The comments are intentionally there so that if two people made different changes and had to bump the same version number, there would be a merge conflict. Without the comment, it would auto-merge to only bump the number by one, even though two changes actually happened.
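The post doesn’t show ARRAY_SUM itself, but a constexpr sketch of how it could work looks like this (using the version values from the snippet above; the exact helper is an assumption on my part):

```cpp
#include <cstddef>
#include <cstdint>

using i32 = int32_t;
using u32 = uint32_t;

// Compile-time sum over a fixed-size array, so the fastfile version
// automatically changes whenever any per-asset version is bumped.
template <size_t N>
constexpr u32 ArraySum( const i32 ( &values )[N] )
{
    u32 sum = 0;
    for ( size_t i = 0; i < N; ++i )
        sum += (u32)values[i];
    return sum;
}

constexpr i32 g_assetVersions[]   = { 9, 10, 1, 6 }; // image, material, script, model
constexpr u32 PG_FASTFILE_VERSION = 10 + ArraySum( g_assetVersions );
static_assert( PG_FASTFILE_VERSION == 36, "expected 10 + (9 + 10 + 1 + 6)" );
```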

One note here is that I append the version number to the fastfile’s name. This works, but anytime it gets bumped, the old fastfiles just lie around and take up space. I think it’s a better idea to serialize the version number into the fastfile, and then check in the Engine whether the version number matches what it expects. This would keep the cache directory smaller when version bumps happen.

Getting Assets In Engine

In Progression, the AssetManager handles the loading and storing of every asset. See asset_manager.hpp and asset_manager.cpp for the full code. This is where the LoadFastFile function is implemented, and it’s very simple: memory map the fastfile, read the first asset’s type, allocate that asset and call its FastfileLoad, then move on to the next asset and repeat. There is a hash map per AssetType for storing these:

unordered_map<string, BaseAsset*> g_resourceMaps[ASSET_TYPE_COUNT];

You can see that I’m just using plain pointers, but you probably want your loaded assets to be ref-counted. I haven’t added this yet, but mostly because I load one demo scene at a time, and it doesn’t really matter :) If you ever manage multiple scenes though, when you unload a scene, you will want to free the assets that are not shared by any other scene.

As for accessing these loaded assets, I decided early on that I wanted the interface AssetManager::Get<AssetType>( assetName ). This just seemed nice and simple to me, which was one of the goals I had. An example would be Material* mat = AssetManager::Get<Material>( "wood_floor" );. To make this happen, we need a way to convert from the actual C++ type to its ASSET_TYPE enum value, to appropriately index into g_resourceMaps[ASSET_TYPE_COUNT]. The only way I know how to do that in C++ is to use static variables:

struct GetAssetTypeIDHelper
{
    static u32 IDCounter;
};

template <typename Derived>
struct GetAssetTypeID : public GetAssetTypeIDHelper
{
    static u32 ID()
    {
        static u32 id = IDCounter++;
        return id;
    }
};

We can then do things like GetAssetTypeID<Material>::ID() to get an index from a type. The caveat is that we have to initialize these in the same order their type appears in the ASSET_TYPE enum, which I do at the beginning of AssetManager::Init:

void Init()
{
    GetAssetTypeID<GfxImage>::ID(); // ASSET_TYPE_GFX_IMAGE
    GetAssetTypeID<Material>::ID(); // ASSET_TYPE_MATERIAL
    GetAssetTypeID<Script>::ID();   // ASSET_TYPE_SCRIPT
    etc...
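With that ID in hand, the Get function itself can be tiny. Here’s a self-contained sketch; the ASSET_TYPE_COUNT value and the Material type are made up for illustration, and Progression’s real Get presumably does more error checking:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

using u32 = uint32_t;

struct BaseAsset { virtual ~BaseAsset() = default; };
struct Material : BaseAsset {}; // hypothetical concrete asset type

struct GetAssetTypeIDHelper { static u32 IDCounter; };
u32 GetAssetTypeIDHelper::IDCounter = 0;

template <typename Derived>
struct GetAssetTypeID : GetAssetTypeIDHelper
{
    static u32 ID()
    {
        static u32 id = IDCounter++; // assigned on first call, in Init() order
        return id;
    }
};

constexpr u32 ASSET_TYPE_COUNT = 16; // assumed upper bound for the sketch
std::unordered_map<std::string, BaseAsset*> g_resourceMaps[ASSET_TYPE_COUNT];

// Get<Material>( "wood_floor" ) indexes the right map by type, then by name
template <typename T>
T* Get( const std::string& name )
{
    auto& map = g_resourceMaps[GetAssetTypeID<T>::ID()];
    auto it   = map.find( name );
    return it != map.end() ? static_cast<T*>( it->second ) : nullptr;
}
```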

And with that, we’ve covered the entire asset pipeline from start to finish! I hope that gives some more insight into how these pipelines work.

A Few Final Remarks

There are a few final things I’d like to discuss about the whole pipeline, particularly in regards to the high-level goals we initially set out to achieve:

Performance: The main goal was to have fast load times. Did we succeed? I’d say yes! Here are the load times for two scenes:

  • Crytek Sponza: 41ms, for a 77MB fastfile containing 77 assets
  • Intel Sponza: 428ms, for a 1.212GB fastfile containing 86 assets (has much larger models and 4k textures compared to Crytek’s).

I do have an NVMe SSD, which helps a lot, but I’m still quite pleased with the load times. The load times also include creating and uploading the textures + models to the GPU. There was also a secondary goal of decent Converter performance. This one heavily depends on what scene you are converting, but overall, my convert times are OK, but not great. For example, a fully fresh convert of Crytek’s Sponza takes 2.5 seconds on my machine, while the Intel Sponza (plus a big skybox) takes 36 seconds. However, a second convert, once nothing has changed, takes only 21ms. This is largely because I haven’t taken the time to optimize convert times; for a hobby engine, I lean towards slow-but-simple converters over lots of complicated fast paths.

Hot Reloading: Currently, my engine doesn’t support hot reloading of assets. It did, a long long time ago, but it was implemented awkwardly, so I ripped it out. It’s something I’d definitely like to add again one day, but honestly for Progression? It doesn’t add a lot of value, when booting the Engine and loading a Scene takes less than one second. The iteration time is already very low :)

Breakage Frequency and Debugging: Surprisingly, this breaks less often than you’d think! It is fairly easy to add bugs, especially with custom serialization and deserialization. However, I find that with the pipeline’s simple structure and naming conventions, it rarely takes me a long time to figure out what mistake I made. In my mind, as long as bugs don’t happen too often and are quick to fix, then you’ve succeeded at something.

Disk Size + Compression: This is a huge one that I haven’t talked about yet. Any real engine will compress their assets/fastfiles/packages, typically in a format that gives very fast decompression rates (LZ4, Oodle, etc). Progression currently doesn’t compress anything, however. I would like to add it of course, but part of the reason I haven’t bothered yet is because of some small experiments with LZ4. Since I haven’t played around with RDO on my textures, using LZ4 on them usually only saves between 0-2%. Even on the rest of the assets, the savings are not amazing. For example, here are the results of using LZ4 on the entire fastfile for the two Sponza scenes I mentioned above:

  • Crytek Sponza: 14.1% savings with default compression, and 20.9% savings with LZ4_HC
  • Intel Sponza: 11% savings with default compression, 18% with LZ4_HC

Not really substantial enough to make me add it yet, especially since LZ4_HC isn’t super fast. I’d rather just keep the fastest iteration times for now. I do love compression, however, so I definitely want to revisit this at some point, and try to use RDO and make assets compression-friendly :)


And with that, I think I’ve covered everything I wanted to. I’ll definitely post a part 2, going over how specific assets are converted, but I hope the general pipeline structure makes sense, and why I chose to design it like that. Thanks to everyone who stuck around to read the whole thing, this definitely got longer than I thought it would. Leave a comment if you’d like; I’d love to hear feedback, and love to hear other decisions people made when structuring their asset pipelines!