<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[SlinDev]]></title><description><![CDATA[Computer Graphics, VR and more]]></description><link>https://slindev.com/</link><image><url>https://slindev.com/favicon.png</url><title>SlinDev</title><link>https://slindev.com/</link></image><generator>Ghost 5.68</generator><lastBuildDate>Tue, 07 Apr 2026 14:35:34 GMT</lastBuildDate><atom:link href="https://slindev.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Blobby Tennis release for Oculus Quest]]></title><description><![CDATA[<p>On Friday night I had the urge to release something for Oculus Quest, so I dug out Blobby Tennis, which I already tweaked to work on the Quest, made a build and put it on SideQuest, the most amazing way to sideload all things Oculus Quest and Go and any</p>]]></description><link>https://slindev.com/blobby-tennis-release-for-oculus-quest/</link><guid isPermaLink="false">5d78118b75824c0235b58dfc</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Tue, 10 Sep 2019 21:13:23 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/09/11162678_1854052231512661_3150400825633800192_n.png" medium="image"/><content:encoded><![CDATA[<img src="https://slindev.com/content/images/2019/09/11162678_1854052231512661_3150400825633800192_n.png" alt="Blobby Tennis release for Oculus Quest"><p>On Friday night I had the urge to release something for Oculus Quest, so I dug out Blobby Tennis, which I had already tweaked to work on the Quest, made a build and put it on SideQuest, the most amazing way to sideload all things onto the Oculus Quest, the Go and any other Android-based VR device. 
With Oculus tightly controlling which games get released on the Quest, SideQuest has found quite some popularity. That also shows in the download numbers, which after about 4 days are just about to reach 1000; that seems pretty good for VR sideloading.</p>
<p><strong>Blobby Tennis for Oculus Quest can be downloaded here for free: <a href="https://sidequestvr.com/app/253/blobby-tennis">https://sidequestvr.com/app/253/blobby-tennis</a></strong></p>
<p>It&apos;s exactly the same game as on PC, but without shadows. Oddly, the performance headroom I had before is gone, so I suspect the shadows are actually being rendered but just aren&apos;t visible because some minor parts are still missing in my Vulkan renderer...</p>
]]></content:encoded></item><item><title><![CDATA[ProjectZ - Day 429 (Lights, Shadows and Reflections)]]></title><description><![CDATA[<p>I replaced my zombies with some more zombie like models and started tweaking the graphics :)</p><h3 id="environment">Environment</h3><p>My static environment is sharing a single lightmap texture which I baked in Blender. The static detail objects such as chairs and beds all share another lightmap. My lightmaps have a resolution of 4096x4096</p>]]></description><link>https://slindev.com/projectz-day-429-lights-shadows-and-reflections/</link><guid isPermaLink="false">5d6ef2d575824c0235b58dbf</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Tue, 03 Sep 2019 23:19:32 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/09/Bildschirmfoto-2019-09-04-um-01.22.19.png" medium="image"/><content:encoded><![CDATA[<img src="https://slindev.com/content/images/2019/09/Bildschirmfoto-2019-09-04-um-01.22.19.png" alt="ProjectZ - Day 429 (Lights, Shadows and Reflections)"><p>I replaced my zombies with some more zombie-like models and started tweaking the graphics :)</p><h3 id="environment">Environment</h3><p>My static environment shares a single lightmap texture, which I baked in Blender. The static detail objects such as chairs and beds all share another lightmap. My lightmaps have a resolution of 4096x4096 and are later compressed using ASTC (with a block size of 6x6) for Android and BC4 (the quality using BC1 was unusable) for Windows/Linux/macOS. 
To do the lightmap UV layout I used the &quot;Texture Atlas&quot; plugin for Blender (which I believe is a default plugin included in Blender, but disabled).<br>The biggest challenge here is seams when mipmaps are being used in the distance, but wasting more texture space on bigger margins or using a mipmap bias when sampling the texture helps to hide those.</p><p>All lighting in this screenshot is baked:<br></p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-09-03-um-16.31.07.png" class="kg-image" alt="ProjectZ - Day 429 (Lights, Shadows and Reflections)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2019-09-03-um-16.31.07.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2019-09-03-um-16.31.07.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2019-09-03-um-16.31.07.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-09-03-um-16.31.07.png 2144w" sizes="(min-width: 720px) 720px"></figure><h3 id="zombies">Zombies</h3><p>I wanted the Zombies to be affected by lights. One approach would be to check the lightmap below them and use that color, but getting that color takes a bit of work which I wasn&apos;t willing to put in at this time. Instead I made it easier by rendering another lightmap into a plane at a height of 1.5m inside the corridors. My levels are all at the same height anyway, so just using a plane works great. 
I chose a plane that is big enough to fit any level, which allows me to just use the texture directly in the Zombies&apos; fragment shader with hardcoded values for scaling and shifting so the texture pixels align with their correct position inside the level.</p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-09-03-um-16.54.52.png" class="kg-image" alt="ProjectZ - Day 429 (Lights, Shadows and Reflections)" loading="lazy" width="1622" height="1440" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2019-09-03-um-16.54.52.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2019-09-03-um-16.54.52.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2019-09-03-um-16.54.52.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-09-03-um-16.54.52.png 1622w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Lightsmask_lvl13.png" class="kg-image" alt="ProjectZ - Day 429 (Lights, Shadows and Reflections)" loading="lazy" width="2000" height="2000" srcset="https://slindev.com/content/images/size/w600/2023/10/Lightsmask_lvl13.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Lightsmask_lvl13.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Lightsmask_lvl13.png 1600w, https://slindev.com/content/images/2023/10/Lightsmask_lvl13.png 2048w" sizes="(min-width: 720px) 720px"></figure><p>That texture is then sampled in the fragment shader and affects the ambient value and a very basic diffuse light source from above the zombie (all my lights are on the ceiling; this &quot;light&quot; is just hardcoded). 
I combine this with some weaker light from below the zombie as fake indirect light to make it look a bit more interesting.</p><p>The shader code looks like this:</p><pre><code>float3 normalizedWorldNormal = normalize(vert.worldNormal);
float lights = saturate(normalizedWorldNormal.y) + saturate(-normalizedWorldNormal.y)*0.3; //These are the lights from above and below using Lambert diffuse shading
float brightness = texture1.Sample(linearRepeatSampler, (-vert.worldPosition.xz + float2(38.0, -12.0))/80.0).r; //My plane is 80x80m and one of its corners is at (38.0, -12.0)
float3 light = ambient * (brightness * 0.6 + 0.3) + lights * brightness * 2.0; //The final term to multiply with the zombie&apos;s texture color; the numbers are the result of trial and error.
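//In other words, the brightness lookup above maps world space to the plane lightmap&apos;s UVs as
//uv = (-worldPosition.xz + cornerOffset) / planeSize, with planeSize = 80.0 and
//cornerOffset = float2(38.0, -12.0); a zombie standing at worldPosition.xz == (38.0, -12.0)
//therefore samples the texel at uv == (0.0, 0.0).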
</code></pre><p>This is it in action:</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://slindev.com/content/media/2023/10/ZombieShading_thumb.jpg" data-kg-custom-thumbnail>
            <div class="kg-video-container">
                <video src="https://slindev.com/content/media/2023/10/ZombieShading.mp4" poster="https://img.spacergif.org/v1/1280x720/0a/spacer.png" width="1280" height="720" loop autoplay muted playsinline preload="metadata" style="background: transparent url(&apos;https://slindev.com/content/media/2023/10/ZombieShading_thumb.jpg&apos;) 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon">
                        <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container kg-video-hide">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:11</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate">1&#xD7;</button>
                        <button class="kg-video-unmute-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
</figure><p>Zombies also have a drop shadow. I am not completely happy with how it looks and may end up tweaking it some more. It uses a light for each zombie that darkens the level geometry within its radius. This can also be seen in the video above. My shader supports 8 of those.</p><h3 id="doors">Doors</h3><p>I started out with a fixed ambient value for every moving door, which I manually assigned in my level JSON files. I came up with some acceptable values, but these do not adjust when a door moves, for example. The new idea here was to reuse the plane lightmap from above.<br>All I did for the doors was blur that lightmap a tiny bit to remove some artifacts and use the lightmap pixels as ambient values for the door. There are lots of cases where this is not perfect, and doors tend to be too bright from one side because the lightmap was rendered without the doors, but it looks much better than before.</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://slindev.com/content/media/2023/10/DoorShading_thumb.jpg" data-kg-custom-thumbnail>
            <div class="kg-video-container">
                <video src="https://slindev.com/content/media/2023/10/DoorShading.mp4" poster="https://img.spacergif.org/v1/1280x720/0a/spacer.png" width="1280" height="720" loop autoplay muted playsinline preload="metadata" style="background: transparent url(&apos;https://slindev.com/content/media/2023/10/DoorShading_thumb.jpg&apos;) 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon">
                        <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container kg-video-hide">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:05</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate">1&#xD7;</button>
                        <button class="kg-video-unmute-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
</figure><h3 id="floor">Floor</h3><p>I had been experimenting with reflections using mirrored geometry and a transparent floor before, but the bigger levels got too large for this to run smoothly. Instead I am now using static cubemap reflections. The cubemap is rendered in Blender using the &quot;Render Cube Map&quot; plugin (<a href="https://github.com/dfelinto/render_cube_map">https://github.com/dfelinto/render_cube_map</a>), then I use the command line tool &quot;cmft&quot; (<a href="https://github.com/dariomanesku/cmft">https://github.com/dariomanesku/cmft</a>) to blur it. A shader then mixes these reflections into the floor based on the viewing angle, using a Fresnel approximation:</p><pre><code>float3 incidentVector = normalize(vert.worldPosition - cameraPosition);
float3 normalizedWorldNormal = normalize(vert.worldNormal);
float3 reflectionDir = reflect(incidentVector, normalizedWorldNormal); //Reflect camera direction on the surface normal
float3 reflections = texture2.Sample(linearRepeatSampler, reflectionDir).rgb; //Sample the cubemap in the reflected direction

float reflectionFactor = 0.3 * pow(1.0 + dot(incidentVector, normalizedWorldNormal), 5);
color.rgb = lerp(color.rgb, reflections, reflectionFactor);
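//Note: reflectionFactor above is a Schlick-style fresnel approximation,
//F = F0 + (1 - F0) * (1 - cos(theta))^5, with F0 effectively 0 and the
//grazing-angle reflectivity capped at 0.3. Since incidentVector points from
//the camera towards the surface, cos(theta) = -dot(incidentVector, normalizedWorldNormal),
//so the 1.0 + dot(...) term equals 1 - cos(theta).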
</code></pre><p>Ideally the cubemap should be rendered from the camera position mirrored along the reflecting plane, in my case about 1.8m below the floor, but I had some problems with clipping away the floor in Blender. Also, this cubemap does not move with the camera, so I chose a position in the middle of a corridor, which doesn&apos;t work that well for rooms. But making it a bit blurry helps hide how wrong it actually is, and it looks a lot more interesting than no reflections at all while being quite cheap.</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://slindev.com/content/media/2023/10/FakeReflections_thumb.jpg" data-kg-custom-thumbnail>
            <div class="kg-video-container">
                <video src="https://slindev.com/content/media/2023/10/FakeReflections.mp4" poster="https://img.spacergif.org/v1/1280x720/0a/spacer.png" width="1280" height="720" loop autoplay muted playsinline preload="metadata" style="background: transparent url(&apos;https://slindev.com/content/media/2023/10/FakeReflections_thumb.jpg&apos;) 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon">
                        <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container kg-video-hide">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:11</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate">1&#xD7;</button>
                        <button class="kg-video-unmute-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
</figure><h3 id="particles">Particles</h3><p>I also added some particle effects, one for breaking doors and another one for exploding zombies, but I am not exactly great at making those. I kinda like my broken door particle effect, while the other one is still not quite right. There is nothing very interesting about these. My particle system generates a mesh for each particle material on the CPU, where each particle has 4 vertices. Each vertex stores the particle&apos;s position and color as well as its own position within the quad. The vertex shader then transforms the vertices into a camera-aligned quad and the fragment shader samples the texture and multiplies it with the vertex color. The result is a very flexible and decently fast particle system that works fine for low to medium amounts of particles.</p><p>Here is a video of all these changes together:</p><div class="light-video-player" data-service="youtube" data-id="vm01KxJJpnI"></div>]]></content:encoded></item><item><title><![CDATA[Project Z - Day 0 to 400 (Recap)]]></title><description><![CDATA[<p>Z Mission was the game my team developed during one of InnoGames game jams about 1 year and a half ago. It was a simple puzzle game with zombies, with the goal for the player to get out of wherever he is by evading zombies and luring them out of</p>]]></description><link>https://slindev.com/project-z-recap/</link><guid isPermaLink="false">5d49d1a4255e8a2ea2ceaffa</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Tue, 06 Aug 2019 21:56:30 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/projectZ-recap-header.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://slindev.com/content/images/2019/08/projectZ-recap-header.jpg" alt="Project Z - Day 0 to 400 (Recap)"><p>Z Mission was the game my team developed during one of InnoGames&apos; game jams about a year and a half ago. 
It was a simple puzzle game with zombies, in which the player has to escape from wherever they are by evading zombies, luring them out of the way and trapping them behind doors. We used the then-latest version of Rayne as the game engine, and while I didn&apos;t bring a VR headset to the game jam, we did develop it with VR in mind.</p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2018-01-28-um-15.41.38.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2018-01-28-um-15.41.38.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2018-01-28-um-15.41.38.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2018-01-28-um-15.41.38.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2018-01-28-um-15.41.38.png 2144w" sizes="(min-width: 720px) 720px"></figure><p>When Oculus released the Oculus GO a bit later that year I wanted to prepare Rayne for the Oculus Quest by remaking Z Mission as Project Z for the GO, which just like the Quest runs Android. I thought I could turn this into a full little game within maybe 4 - 6 months; as always, that estimate was very wrong :).</p><p>This was also a good opportunity to get the Vulkan renderer working and up to feature parity with my Metal and D3D12 renderers. But because there were only a few random hints of the GO getting Vulkan support, and no actual support yet, I started out implementing it on Windows first, which worked out quite alright. I am sure there are a lot of things that can still be optimized (some I already improved), and I am not taking advantage of multithreading in the renderer yet, but most things are just very similar to D3D12, only with more consistent naming. The one big annoying thing with Vulkan is shader compiling. 
While it is of course possible to include a shader-to-SPIR-V compiler in the engine, shader compilation is not really part of Vulkan, which just wants SPIR-V memory blobs. With my system of having an &#xDC;bershader and enabling features by recompiling it with different defines turned on, this was a problem. As a result I ended up writing and assigning specialized shaders that I precompiled by hand (using a SPIR-V compiler, obviously), which was turning into a lot of work to maintain... I solved this recently by automating the process, so that one HLSL shader gets recompiled into all possible permutations for the 3 different rendering APIs (more about that in another post maybe).</p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Image-2018-05-24-at-4.36.42-PM.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="1560" height="1211" srcset="https://slindev.com/content/images/size/w600/2023/10/Image-2018-05-24-at-4.36.42-PM.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Image-2018-05-24-at-4.36.42-PM.png 1000w, https://slindev.com/content/images/2023/10/Image-2018-05-24-at-4.36.42-PM.png 1560w" sizes="(min-width: 720px) 720px"></figure><p>While the Vulkan rendering was starting to work decently, what I really wanted was to make it work on the GO.</p><p>First I tried the Vulkan samples for Android on the GO. They just worked, which was a good start, but because there was no support for the Oculus SDK they didn&apos;t do anything VR-related and just used the GO as a single screen without any of the sensor data. This gave me some hope of finding a workaround, so I started porting the engine to Android. It took a day or two until it was all kinda working. But still no VR.</p><p>Looking at the available Vulkan extensions, it turned out that there was some functionality to share memory with OpenGL ES 3.0. 
I did get this to render into an OpenGL surface, but the Oculus SDK wants to handle the swap chain textures for me, which somehow ended up complicating things more than I wanted.</p><p>Around the same time, or maybe earlier, Oculus released experimental Vulkan support for Unreal on the GO. It turns out it was part of their latest SDK release at the time, but they didn&apos;t include any headers yet. After a day of scraping the symbols for it and trying to turn it into something that would work, I gave up and contacted their developer support. As a member of their Oculus Start developer program I am entitled to special developer support, and apparently these support tickets are very useful. A day later they got me in contact with the developer who was actually implementing the Vulkan support and I got all the headers and information I needed (which became part of one of their SDK releases a month or two later).</p><p>Their Vulkan support seems to be using the same shared memory technique I was trying to use, but as it&apos;s implemented on their end it allows them to copy less data around than I would have had to. Anyway, the result was fully working Vulkan support on the GO! BUT performance with MSAA was very bad and there was no support for fixed foveated rendering with Vulkan. Also, no debugging tools worked with Vulkan at that time. I decided to just accept those problems and carry on with what I had.</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://slindev.com/content/media/2023/10/device-2018-06-23-035423_thumb.jpg" data-kg-custom-thumbnail>
            <div class="kg-video-container">
                <video src="https://slindev.com/content/media/2023/10/device-2018-06-23-035423.mp4" poster="https://img.spacergif.org/v1/2560x1440/0a/spacer.png" width="2560" height="1440" playsinline preload="metadata" style="background: transparent url(&apos;https://slindev.com/content/media/2023/10/device-2018-06-23-035423_thumb.jpg&apos;) 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon">
                        <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:39</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate">1&#xD7;</button>
                        <button class="kg-video-unmute-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
        </figure><p>I created a first set of levels and had some friends and family try them, but they turned out to be too hard and frustrating, so I came up with even easier first levels that ramp up the difficulty only very slowly. Not being good at level building slowed my efforts down a lot. I considered making a simple editor for these levels a couple of times but always decided to just stick with Blender, as my requirements were very low. I definitely regret that decision now, as an editor would have improved my productivity a lot and gotten me more and better levels in the end. It could maybe even have shipped with the game so users could create even more levels. Anyway, my motivation to create more levels was very low (and still is).</p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2018-07-31-um-00.01.33.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2018-07-31-um-00.01.33.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2018-07-31-um-00.01.33.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2018-07-31-um-00.01.33.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2018-07-31-um-00.01.33.png 2144w" sizes="(min-width: 720px) 720px"></figure><p>A month or two later I went to Oculus Connect 5, the Oculus developer conference, to spectate Echo Arena, talk to some Oculus developer support people and watch some talks. I had scheduled a support session (and listed the things I wanted to talk about), but when I went there most of the support people were gone and nobody familiar with my problems (mostly MSAA, but also some other things) was around. The guy I talked to did write down my email and said he&apos;d get back to me with some answers (but never did). 
Fortunately, just as I was about to leave, someone else came in who had some ideas about why my MSAA was so slow (I reused a fairly big render target that could fit both eyes, and while it didn&apos;t seem like it was fully resolved twice, he said there may be an unresolve pass before the second eye renders, which can be very slow. I eventually split it into one render target per eye, which improved performance a lot) and who pointed me to someone else that could help me, who was just giving a talk. I watched most of that talk and then talked to him. He mostly just confirmed things I already knew, but maybe he remembered me when they made RenderDoc work with Vulkan on GO earlier this year.</p><p>When a friend asked me about Linux, I updated the Linux support Rayne used to have and also tried to make the game work with SteamVR on Linux, but it kept causing a random crash, so eventually I gave up (I may get back to it in the future). While very slowly working on the first 10 of the 15-20 levels I wanted to have, I also started updating a UI module that Sidney started a few years ago and made it work with in-game textures. It&apos;s using Skia, the 2D graphics library from Google that is used for rendering on Android and in Chrome, and it has CPU, OpenGL and Vulkan backends. It&apos;s somewhat low level, but it has basic text rendering and can draw all kinds of shapes and images. For simplicity I am using its CPU rendering, which, after some performance issues with transferring the data to the GPU, has been fast enough for what I need. I initially wanted to implement my own fully hardware accelerated vector rendering, but just getting the data prepared for it seemed like a lot of work.</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://slindev.com/content/media/2023/10/2018-11-04-17-12-32_thumb.jpg" data-kg-custom-thumbnail>
            <div class="kg-video-container">
                <video src="https://slindev.com/content/media/2023/10/2018-11-04-17-12-32.mp4" poster="https://img.spacergif.org/v1/960x540/0a/spacer.png" width="960" height="540" playsinline preload="metadata" style="background: transparent url(&apos;https://slindev.com/content/media/2023/10/2018-11-04-17-12-32_thumb.jpg&apos;) 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon">
                        <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:38</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate">1&#xD7;</button>
                        <button class="kg-video-unmute-icon">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide">
                            <svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
        </figure><p>The levels were still the thing holding me back, so I spent some time prototyping other projects: </p><ul><li>A multiplayer boxing game with a multiplayer audience that could walk and talk outside of the ring and sign up for matches, which would then teleport them into the ring when ready. It&apos;s ugly and very limited, but a decent proof of concept based on the code I already had for Swords.</li><li>An asymmetrical multiplayer horror game for VR, which I intend to spend more time on after finishing Project Z.</li></ul><p>But because I really want to finish Project Z and release it, mostly for GO but also for other VR headsets, before really working on something new, I started posting requests for someone to help me create good looking levels for the game based on the levels I already had. Eventually I found someone who agreed to make them for the low price I was willing to pay. He gave up in the end, but left me a very high quality base to work with; I ended up putting most of it together myself and baking the lights and everything. The actual modeling and texturing work he did do was very good.</p><p>Now I am stuck with levels again. I did finish putting together those first 10 levels and decided to only do 4 more, which I have also put together already but with some flaws I want to fix. That is again slowing me down, as I am apparently too much of a perfectionist... 
Not in terms of making the game perfect, but in terms of fixing overlapping polygons, small holes in the geometry and getting the shading just right without too many artifacts.</p><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-06-13-um-12.14.02.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2019-06-13-um-12.14.02.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2019-06-13-um-12.14.02.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2019-06-13-um-12.14.02.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-06-13-um-12.14.02.png 2144w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-07-26-um-21.50.43.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2019-07-26-um-21.50.43.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2019-07-26-um-21.50.43.png 1000w, https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2019-07-26-um-21.50.43.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-07-26-um-21.50.43.png 2144w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-06-05-um-00.09.16.png" class="kg-image" alt="Project Z - Day 0 to 400 (Recap)" loading="lazy" width="2000" height="1257" srcset="https://slindev.com/content/images/size/w600/2023/10/Bildschirmfoto-2019-06-05-um-00.09.16.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Bildschirmfoto-2019-06-05-um-00.09.16.png 1000w, 
https://slindev.com/content/images/size/w1600/2023/10/Bildschirmfoto-2019-06-05-um-00.09.16.png 1600w, https://slindev.com/content/images/2023/10/Bildschirmfoto-2019-06-05-um-00.09.16.png 2144w" sizes="(min-width: 720px) 720px"></figure><p>And some more information about the game itself:</p><ul><li>There are 4 different zombie types: a melee zombie, a ranged attack zombie, a zombie that can break through doors, and a zombie that attacks other zombies as well as the player (and also gets attacked by them if too close).</li><li>There will be 14 levels, most very short, though some are bigger and harder.</li><li>I do intend to change the zombie model and recently ported the animation system from old Rayne to the new Rayne, so zombies can be animated now.</li><li>Total play time for a first playthrough is probably gonna be a bit more than an hour (I can do it in 10-15mins).</li><li>Zombies may or may not end up glowing in the end; it doesn&apos;t look great, but it tells the player where they are and makes the game less frustrating.</li><li>There are some quite useless stats shown in the menu for each level, such as total number of deaths and fastest time (all for the local player only, stored on device).</li><li>I intend to sell the game for 4.99&#x20AC; on all platforms.</li><li>It&apos;s currently not taking advantage of positionally tracked controllers, as grabbing door handles and pulling/pushing them takes a lot longer than just getting close to a door and pressing a button and would make the game a lot harder.</li></ul><p>I hope to get it released before Halloween.</p>]]></content:encoded></item><item><title><![CDATA[Ghost]]></title><description><![CDATA[<p>Or that&apos;s what my blog probably looks like... Anyway, I thought it was time for a change and switched my blogging software from Anchor to Ghost. 
I also switched from Apache to Nginx and from my own ugly mailserver setup to a more automated one</p>]]></description><link>https://slindev.com/ghost-post/</link><guid isPermaLink="false">5d48c19ef8c1d125a7dcbcc5</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Tue, 06 Aug 2019 00:07:41 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/Bildschirmfoto-2019-08-06-um-11.00.12.png" medium="image"/><content:encoded><![CDATA[<img src="https://slindev.com/content/images/2019/08/Bildschirmfoto-2019-08-06-um-11.00.12.png" alt="Ghost"><p>Or that&apos;s what my blog probably looks like... Anyway, I thought it was time for a change and switched my blogging software from Anchor to Ghost. I also switched from Apache to Nginx and from my own ugly mailserver setup to a more automated one using Mail-in-a-Box. Everything is still hosted on Linode as it has been for a while.</p><p>There was no real reason for this change and no major challenges in setting it up, because Linode has a big library of good how-to articles for almost everything. It took a bit longer than a day, with most of the time spent on getting my old web projects Lilly rettet Weihnachten, Glow and some other thing I once did with some friends running again, all of which required some minor changes to MySQL or PHP to work.</p><p>And just like the last time I updated my web setup, maybe I&apos;ll be writing more again. I&apos;ve definitely been working on some things and it shouldn&apos;t be hard to find something worth writing about.</p>]]></content:encoded></item><item><title><![CDATA[Echo Arena]]></title><description><![CDATA[<p>What can I say... after writing my previous blog post about Project Swords I started downloading the open beta of the VR game Echo Arena and couldn&apos;t stop playing since.</p>
<p>In Echo Arena two teams try to get hold of a disc and to score into the opponent</p>]]></description><link>https://slindev.com/echo-arena/</link><guid isPermaLink="false">5d47705fb2992b6818b1a397</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Mon, 16 Oct 2017 13:30:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/echo-arena-featured-image-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://slindev.com/content/images/2019/08/echo-arena-featured-image-1.jpg" alt="Echo Arena"><p>What can I say... after writing my previous blog post about Project Swords I started downloading the open beta of the VR game Echo Arena and couldn&apos;t stop playing since.</p>
<p>In Echo Arena two teams try to get hold of a disc and score it into the opposing team&apos;s goal in zero gravity. The players move around with small boosters attached to their wrists, a bigger booster attached to their head, and by grabbing on to surfaces and pushing themselves off of them. The maximum speed for each of those movements is capped quite low, but can be raised by holding on to and launching off of other players.</p>
<div class="light-video-player" data-service="youtube" data-id="5xPRIocr8ts"></div>
<p>Even though I am quite prone to motion sickness, this game&apos;s fast and fluid locomotion system felt good to me from the start. It&apos;s a ton of fun and I am actually really good at it.<br>
The game eventually got released while still lacking important features that have been and still are being added, such as playing in fixed parties and a spectator mode. There are also occasional server issues and random crashes, but most of the time it works and it&apos;s great.</p>
<p>Echo Arena is the second VR game (next to Unspoken) being pushed to become a competitive esports title. To do this, Oculus works together with ESL and Intel to organize competitions with a big prize pool, in what they call the VR Challenger League. Teams in North America and Europe can compete in weekly cups for 50&#x20AC; in prize money per person each week, as well as points to qualify for offline events. The first of those offline events was the North American regional tournament at Oculus Connect 4, and the next one will be the EU regionals held at ESL One in Hamburg. The best teams of both regions will eventually meet in the world cup held in Poland as part of IEM Katowice. The winning team at the regionals gets 3.5k&#x20AC; per person, and it&apos;s about twice that for the world cup. Considering that it&apos;s a new game with only a few teams competing, and that they are paying for the trips and hotels of the qualifying teams, this is really nice.</p>
<p>In Europe there are only 7 actively competing teams at the moment and I am in one of them, called &quot;Jacks&quot; after Jack, the player character from the game Lone Echo. Lone Echo is the single player counterpart to Echo Arena, and its protagonist is essentially the same robot everyone plays in Echo Arena.<br>
We keep having one of us unable to compete and having to find substitute players, but we still managed to win the first 3 cups, skipped the fourth completely, played well in two more and won the latest one again. As a full team we haven&apos;t been defeated yet, we qualified for the regionals and should have pretty good chances to go to Poland early next year. We are currently placed second in the EU league and will hopefully get back to being the best team in points :).</p>
<p>Here is us playing against our biggest opponent &quot;Ding!&quot;:</p>
<div class="light-video-player" data-service="youtube" data-id="ZW_ETM3esw4"></div>
<p>There are videos from all the previous cups on our youtube channel: <a href="https://www.youtube.com/channel/UCS2y476rtK33oKc_aRG5ZXA">www.youtube.com/channel/UCS2y476rtK33oKc_aRG5ZXA</a></p>
<p>If you own a Rift and haven&apos;t tried Echo Arena yet, you should, because this is the best VR has to offer right now and it is amazing.</p>
<p>So what happened to Project Swords?<br>
I didn&apos;t directly work on it for a while, but I didn&apos;t give up either, and the engine got some massive improvements over the last few months :). I&apos;ll hopefully post about that soon.</p>
]]></content:encoded></item><item><title><![CDATA[Project Swords - Devlog 3 - Networking Basics]]></title><description><![CDATA[<p>After spending way more time than expected on getting microphone input and output to work correctly and fixing a bug in libsoundio along the way and a week at WWDC, I finally got to start implementing multiplayer.</p>
<p>Here is a video of what I have so far:</p>
<div class="light-video-player" data-service="youtube" data-id="rA72WGMpnxk"></div>
<h3 id="useful-libraries">Useful Libraries</h3>
<p>For</p>]]></description><link>https://slindev.com/project-swords-devlog-3-networking-basics/</link><guid isPermaLink="false">5d477012b2992b6818b1a38c</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Sun, 02 Jul 2017 13:57:00 GMT</pubDate><content:encoded><![CDATA[<p>After spending way more time than expected on getting microphone input and output to work correctly and fixing a bug in libsoundio along the way and a week at WWDC, I finally got to start implementing multiplayer.</p>
<p>Here is a video of what I have so far:</p>
<div class="light-video-player" data-service="youtube" data-id="rA72WGMpnxk"></div>
<h3 id="useful-libraries">Useful Libraries</h3>
<p>For networking in previous projects I used <a href="http://enet.bespin.org">ENet</a> and because it always worked great, I am using it again :). ENet is a lightweight multiplatform networking library on top of UDP that provides an easy to use interface to connect peers and send data around. It takes care of managing packet order and bandwidth and even offers reliable packets when needed. It really just works.</p>
<p>As the data that is sent around needs to come from somewhere and is ideally as compact as possible while being easy to read and write, I am also using <a href="https://developers.google.com/protocol-buffers/">protobuf</a>. It consists of a data structure description language, a compiler to turn those descriptions into structures in a language of your choice and a library to serialize and deserialize those structures to and from compact binary blobs, but also other formats.</p>
<h3 id="general-architecture">General Architecture</h3>
<p>As my end goal is a competitive multiplayer game, cheating will hopefully become a thing and I will need ways to prevent it as much as possible. So the first choice I made was to use a server-authoritative client-server architecture. This means that there will always be one server instance that all clients connect to, and this server instance verifies everything it gets from the clients and sends back corrected data if something is off.</p>
<p>Quite common for multiplayer games is to have the clients send their user input, as it doesn&apos;t change that often, but in my case most input is absolute data from head and hand tracking. Verifying that on the server side is more complicated, but I guess a first verification could be that the hands can only be a maximum distance away from the head and that the head position can&apos;t change faster than a certain speed. Obstacles also need to be taken into account, and I am sure a lot more can be done. But to get going I currently just trust the clients to send good data and have the server pass it on to all clients. I will probably write another devlog about this topic once I get there.</p>
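<p>As a rough illustration of what such a first server-side sanity check could look like, here is a small sketch. This is not the actual game code; the names and thresholds (maximum arm reach, maximum head speed) are made up for the example:</p>

```cpp
#include <cmath>

// Minimal 3D vector for the sketch.
struct Vec3
{
	float x, y, z;
};

static float Distance(const Vec3 &a, const Vec3 &b)
{
	float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
	return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Hypothetical thresholds: roughly an arm's length, and a generous
// cap on how fast a head can physically move.
constexpr float kMaxHandDistance = 1.2f; // meters
constexpr float kMaxHeadSpeed = 10.0f;   // meters per second

// Returns true if a newly received player state looks plausible:
// both hands within reach of the head, head not teleporting.
bool IsPlayerStatePlausible(const Vec3 &head, const Vec3 &leftHand, const Vec3 &rightHand,
                            const Vec3 &previousHead, float secondsSinceLastState)
{
	if(Distance(head, leftHand) > kMaxHandDistance) return false;
	if(Distance(head, rightHand) > kMaxHandDistance) return false;
	if(Distance(head, previousHead) > kMaxHeadSpeed * secondsSinceLastState) return false;
	return true;
}
```

<p>A real version would also have to handle obstacles and tracking glitches (tracking loss can legitimately make a hand jump), so failing a check should probably trigger a correction rather than an outright rejection.</p>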
<p>My long-term goal is to provide dedicated servers, but as those are quite expensive, I am starting out with one of the players being the server (which obviously allows that player to cheat somewhat easily) and maybe implement a mechanism for players to automatically reconnect to a new host if the previous one disconnects.</p>
<h3 id="connection-handling">Connection Handling</h3>
<p>When a client wants to connect to the server, ENet takes care of notifying both and exchanging some initial data. After that the server automatically sends pings to its clients, which are needed internally by ENet. If a ping is not responded to for a while, the server detects it as a disconnect. Of course a client can also gracefully disconnect, which sends a disconnect message to the server; the server then tells the client that it&apos;s fine to close the connection now and everything is great. This obviously only works if the client knows it wants to disconnect and has enough time for the exchange to happen. All of this is part of ENet and mostly works automatically.</p>
<p>If I were to only send reliable packets around, the disconnect timeout would also work on the client if the server went down, but with unreliable packets, which don&apos;t require an acknowledgement to be sent back, the client will never know if the server is still there or not. To solve this I implemented my own timeout, which gets reset every time the client receives a packet. If it doesn&apos;t receive one for a while, it assumes that it was disconnected.</p>
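<p>The timeout logic described here boils down to a timestamp that gets reset on every received packet. A minimal sketch (the class name and the 5 second threshold in the usage below are my own, not from the actual implementation):</p>

```cpp
// Tracks when the last packet arrived; if nothing arrives for longer
// than the timeout, the client assumes the server is gone.
class ConnectionTimeout
{
public:
	ConnectionTimeout(double timeoutSeconds) : _timeoutSeconds(timeoutSeconds), _lastPacketTime(0.0) {}

	// Call whenever any packet (reliable or unreliable) is received.
	void ResetTimeout(double currentTime)
	{
		_lastPacketTime = currentTime;
	}

	// Call once per frame; true means the server stopped responding.
	bool IsTimedOut(double currentTime) const
	{
		return (currentTime - _lastPacketTime) > _timeoutSeconds;
	}

private:
	double _timeoutSeconds;
	double _lastPacketTime;
};
```

<p>One subtlety: if the server only sends state updates when something changes, it needs to keep sending occasional keep-alive packets, or an idle but healthy connection would trip this timeout.</p>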
<p>To let other clients know about a new player joining or disconnecting, the server broadcasts connect and disconnect messages to all clients, and to be sure they actually arrive, these are sent as reliable packets. An important part of the connect messages is that they also include a unique client ID assigned by the server and used to identify which client a message belongs to. ENet actually has its own concept of client IDs, but I didn&apos;t find much information on it, and in the end it seemed more flexible to just generate my own. I am currently generating them starting at 1 (0 is the server), counting up and reusing the IDs of disconnected clients (not entirely sure why I do that, it just seemed nicer...).</p>
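<p>The ID scheme described above (start at 1, count up, reuse freed IDs) can be sketched like this. The class name is hypothetical; an ordered set makes the smallest freed ID come back first:</p>

```cpp
#include <cstdint>
#include <set>

// Hands out client IDs starting at 1 (0 is the server) and reuses
// the IDs of disconnected clients, smallest freed ID first.
class ClientIDAllocator
{
public:
	uint32_t Allocate()
	{
		if(!_freeIDs.empty())
		{
			uint32_t id = *_freeIDs.begin(); // smallest freed ID
			_freeIDs.erase(_freeIDs.begin());
			return id;
		}
		return _nextID++;
	}

	// Called when a client disconnects, making its ID available again.
	void Free(uint32_t id)
	{
		_freeIDs.insert(id);
	}

private:
	uint32_t _nextID = 1; // 0 is reserved for the server
	std::set<uint32_t> _freeIDs;
};
```

<p>One caveat with reusing IDs: a late packet from a disconnected client could be attributed to whoever inherited the ID, which is a reason many games prefer never reusing them.</p>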
<p>While the clients take care of writing their own ID into messages, the server uses its own internal mapping (using the ENet peer&apos;s data pointer) and overwrites those IDs before sending the messages on to the clients, to prevent one client from posing as a different one.</p>
<h3 id="data-structure">Data Structure</h3>
<p>The data I send around is still very much WIP and will probably change a lot over time, but the general idea right now is to have a top-level &quot;message&quot; (which is what protobuf calls its objects) that can contain either information about a client connecting or disconnecting, a player&apos;s current state (which currently consists of head and hand position and orientation) or speech data (more on speech in another devlog).<br>
For this I use protobuf&apos;s &quot;oneof&quot; label and a couple of custom messages:</p>
<pre><code>message Packet
{
	oneof content
	{
		Connection connection = 1;
		PlayerState playerState = 2;
		Speech speech = 3;
	}
}
</code></pre>
<p>Those other types look like this:</p>
<pre><code>message Connection
{
	enum State
	{
		CONNECTED = 0;
		DISCONNECTED = 1;
		REFUSED = 2;
	}

	uint32 id = 1;
	State state = 2;
	string message = 3;
}

message PlayerState
{
	uint32 id = 1;
	Head head = 2;
	Hand leftHand = 3;
	Hand rightHand = 4;
}

message Speech
{
	uint32 id = 1;
	bytes data = 2;
}
</code></pre>
<p>Where the Hand and Head types both only contain a vector for position and a quaternion for orientation. I already use two different messages though, as Hand, for example, will most probably get some additional data for finger tracking in the future.</p>
<h3 id="when-to-send-packets">When to send Packets</h3>
<p>The easiest approach would be to send a new position every frame, but framerates can vary and, more importantly, at 90 fps many more packets are sent than usually needed. I solved this by implementing a timer and having clients send messages about 50ms apart. I am thinking about dynamically adjusting this based on movement speed and distance to other players. As this is supposed to become a sword fighting game, precise hand movements are going to be somewhat important and 50ms might not be good enough.</p>
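<p>A frame-rate independent send timer like the one described can be sketched with a simple accumulator; the name and structure here are my own, not the game&apos;s actual code:</p>

```cpp
// Decides once per frame whether it is time to send a new state
// packet, independent of the frame rate.
class SendTimer
{
public:
	// interval is in seconds, e.g. 0.05 for one packet every 50ms.
	SendTimer(double interval) : _interval(interval), _accumulator(0.0) {}

	// Call once per frame with the frame's delta time; returns true
	// when a packet should be sent this frame.
	bool ShouldSend(double deltaTime)
	{
		_accumulator += deltaTime;
		if(_accumulator >= _interval)
		{
			// Subtract instead of resetting to zero, so leftover time
			// carries over and the average rate stays correct.
			_accumulator -= _interval;
			return true;
		}
		return false;
	}

private:
	double _interval;
	double _accumulator;
};
```

<p>Subtracting the interval rather than zeroing the accumulator matters: at 90 fps the frames don&apos;t line up with 50ms boundaries, and a reset would quietly stretch the effective send interval.</p>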
<p>The time it takes for messages to make it to and from other clients is probably going to be a way bigger problem though.</p>
<h3 id="synchronizing-players">Synchronizing Players</h3>
<p>I am currently going with some standard techniques (as for example described <a href="https://developer.valvesoftware.com/wiki/Latency_Compensating_Methods_in_Client/Server_In-game_Protocol_Design_and_Optimization">here</a>) which I can build upon when needed. This means that the player is fully simulated on the client for smooth movement and only corrected if the server sends back something different. Due to the lag between sending a message to the server and getting the corrected response, I&apos;ll have to include a packet identifier and only correct if the player position at the time the message was sent was wrong, because the current position will most likely always differ. I don&apos;t have this yet though.</p>
<p>I started out by only moving other players when a message from them was received, but the result is not exactly smooth. I improved this by storing their previous position and orientation as well as the new ones, and then interpolating between those over time until a new message is received. It still doesn&apos;t look great sometimes, but it is already a big improvement.<br>
The main problem is that the other player I see in my game is way behind the real player playing. I might need some more advanced prediction for this in the future, but predictions might end up being wrong, resulting in much worse problems.</p>
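<p>The interpolation scheme described above can be sketched roughly like this for the position part (orientation would use the same structure with a quaternion slerp). This is a simplified illustration with made-up names, not the actual game code:</p>

```cpp
#include <cmath>

struct Vec3
{
	float x, y, z;
};

// Linear interpolation; t runs from 0 to 1 over one packet interval.
Vec3 Lerp(const Vec3 &from, const Vec3 &to, float t)
{
	return {from.x + (to.x - from.x) * t,
	        from.y + (to.y - from.y) * t,
	        from.z + (to.z - from.z) * t};
}

// A remote player smoothly moves from its previously displayed
// position toward the most recently received one.
struct RemotePlayer
{
	Vec3 previousPosition{0, 0, 0};
	Vec3 targetPosition{0, 0, 0};
	float t = 1.0f;

	// Called when a new state packet arrives: restart the interpolation
	// from wherever the player is currently displayed.
	void ReceivePosition(const Vec3 &position)
	{
		previousPosition = CurrentPosition();
		targetPosition = position;
		t = 0.0f;
	}

	// Called once per frame; sendInterval is the expected packet
	// spacing (~0.05s), and t is clamped so the player waits at the
	// last known position if the next packet is late.
	void Update(float deltaTime, float sendInterval)
	{
		t = std::fmin(t + deltaTime / sendInterval, 1.0f);
	}

	Vec3 CurrentPosition() const
	{
		return Lerp(previousPosition, targetPosition, t);
	}
};
```

<p>Note that this always displays remote players roughly one packet interval in the past, which is exactly the latency problem mentioned above; extrapolating instead would hide the delay but can overshoot when the real player changes direction.</p>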
<h3 id="abstraction-in-the-game">Abstraction in the Game</h3>
<p>I often read that it is very hard to turn a singleplayer game into multiplayer and as it turns out this is absolutely true. Fortunately I am just getting started and have a chance to already do some useful abstractions without having to change everything.</p>
<p>An instance of the game can have either a client or a server object that handles incoming and outgoing packets. In both cases there can be a local player that either directly passes its packets on to the server to broadcast or, in the case of it being a client, sends them to the server and, once it&apos;s implemented, will also take care of corrected incoming data. Then there is also a player class which is instantiated for every unknown client ID and updated with new data whenever it is received. I might just have this class do a full movement simulation with physics on all clients and have the server pass the result on as the corrected data, but maybe there will be a different player version on the server in the future.</p>
<h3 id="testing">Testing</h3>
<p>Most of my testing so far is just several instances on the same PC, so right now lag is almost nonexistent. I did already try it on my local network too, but obviously that isn&apos;t much better. It&apos;s still too early to worry about lag and dropped packets and such anyway :).</p>
]]></content:encoded></item><item><title><![CDATA[Project Swords - Devlog 2 - Audio Playback]]></title><description><![CDATA[<h3 id="the-goal">The Goal</h3>
<p>I want a fast and lightweight library handling 3D audio sources in a game environment as realistic as possible. This includes occlusion, realistic attenuation and delay based on the distance as well as all kind of indirect effects of sound reflecting from the level geometry.<br>
It should also</p>]]></description><link>https://slindev.com/project-swords-devlog-2-audio-playback/</link><guid isPermaLink="false">5d476fb1b2992b6818b1a383</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Sat, 20 May 2017 22:15:00 GMT</pubDate><content:encoded><![CDATA[<h3 id="the-goal">The Goal</h3>
<p>I want a fast and lightweight library handling 3D audio sources in a game environment as realistically as possible. This includes occlusion, realistic attenuation and delay based on distance, as well as all kinds of indirect effects of sound reflecting off the level geometry.<br>
It should also be able to use an HRTF when using headphones and do correct panning for different speaker layouts.<br>
But I also want to be able to somewhat correctly play stereo music or sound effects that don&apos;t have a position in the game world.<br>
And of course it should work on all the big platforms.</p>
<h3 id="the-video">The Video</h3>
<p>Here is a video of what it currently looks and sounds like. (There are some sound artifacts in the video; I didn&apos;t notice them while recording, so I am not entirely sure where they come from...)</p>
<div class="light-video-player" data-service="youtube" data-id="SNYHkOf6AX0"></div>
<h3 id="the-libraries">The Libraries</h3>
<p>Turns out that the still very new <a href="https://valvesoftware.github.io/steam-audio/">Steam Audio</a> with its very permissive license can fulfill most of those requirements for 3D audio sources. But it is very specialized: the audio still needs to come from somewhere and go somewhere, and it can&apos;t even do attenuation itself, beyond providing a factor to use.<br>
Fortunately I already have a working ogg file loader that just loads and decodes ogg files into RAM and could in the future be tweaked to stream big files from the HDD. It mainly just uses the <a href="https://xiph.org/vorbis/">ogg vorbis</a> reference implementation.<br>
This still leaves the output, and there are many different solutions. I could implement my own for the different platforms, but that seemed like a lot of work, especially considering that I found three different open source projects fulfilling my requirements: <a href="http://www.portaudio.com/">PortAudio</a>, <a href="https://www.music.mcgill.ca/~gary/rtaudio/">RtAudio</a> and <a href="http://libsound.io/">libsoundio</a>. While all three are probably acceptable options, I somewhat randomly picked libsoundio.</p>
<h3 id="the-oculus-rift">The Oculus Rift</h3>
<p>My main goal is to do realistic audio in VR for headphones, and the Oculus Rift has integrated ones. It is possible to set different output devices in the Oculus settings, and the SDK has functionality to get the preferred audio device, but only as a device GUID.<br>
Fortunately it turned out that libsoundio also has an ID for each device which on Windows happens to be the same as the device GUID returned by the Oculus SDK.<br>
My solution is to enumerate all audio devices with libsoundio and just use the first one with a matching ID.<br>
On my system most IDs exist twice in that list, but the first one happens to be the correct one. Since the second one is the &quot;raw&quot; device, I could probably ignore those completely for my use case.</p>
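<p>The matching logic boils down to something like this (a simplified, self-contained sketch; my real code enumerates devices through libsoundio, while DeviceInfo and FindOutputDevice here are made-up names for illustration):</p>

```cpp
#include <string>
#include <vector>

//Simplified stand-in for what libsoundio reports per output device
//(the real struct is SoundIoDevice, with an id string and a raw flag).
struct DeviceInfo
{
	std::string id;
	bool isRaw;
};

//Returns the index of the first non-raw device whose ID matches the GUID
//reported by the Oculus SDK, or -1 if there is none.
int FindOutputDevice(const std::vector<DeviceInfo> &devices, const std::string &oculusGUID)
{
	for(size_t i = 0; i < devices.size(); i++)
	{
		if(!devices[i].isRaw && devices[i].id == oculusGUID)
			return static_cast<int>(i);
	}
	return -1;
}
```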
<h3 id="the-ambisonics">The Ambisonics</h3>
<p>Looking at the Steam Audio documentation, going ambisonics all the way seems to be the best compromise between speed and quality. The convolution filter that is applied to all sources for the indirect sound effects already returns its results encoded in ambisonics and mixed for all sources. And while Steam Audio takes care of all the bells and whistles for those indirect effects given an environment, all it does for direct audio is calculate a couple of parameters needed to render the sound source correctly. But there is also a panning effect that can encode such a direct sound source into ambisonics.</p>
<h3 id="the-pipeline">The Pipeline</h3>
<p>My audio pipeline starts with an asset representing the data from an audio file, and a sampler that, given a time, channel and asset, returns a sample from that asset, currently using linear interpolation.</p>
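<p>A minimal, self-contained sketch of such a sampler (the Asset struct and the function name are made up for illustration; my actual classes look different):</p>

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

//Hypothetical stand-in for the audio asset: interleaved sample data plus format info.
struct Asset
{
	std::vector<float> samples; //interleaved
	uint32_t channelCount;
	uint32_t sampleRate;
};

//Returns the sample of one channel at an arbitrary time using linear interpolation.
float SampleAsset(const Asset &asset, double time, uint32_t channel)
{
	uint32_t frameCount = static_cast<uint32_t>(asset.samples.size()) / asset.channelCount;
	double samplePosition = time * asset.sampleRate;
	uint32_t lowerIndex = std::min(static_cast<uint32_t>(samplePosition), frameCount - 1);
	uint32_t upperIndex = std::min(lowerIndex + 1, frameCount - 1);
	float blend = static_cast<float>(samplePosition - lowerIndex);

	float lower = asset.samples[lowerIndex * asset.channelCount + channel];
	float upper = asset.samples[upperIndex * asset.channelCount + channel];
	return lower + (upper - lower) * blend; //linear interpolation between the two samples
}
```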
<p>Then there is the audio source, which has a sampler and an asset, a position, a gain, a radius (it is only used for Steam Audio&apos;s volumetric occlusion effect and has nothing to do with the range, which is infinite, with loudness depending on the gain value) and a pitch property.<br>
It internally keeps track of the current playback time (and progresses it with every audio frame). The audio source also has a method that is called per frame, feeds the samples for the complete frame to its Steam Audio effects and returns the resulting direct audio as ambisonics data.<br>
Audio sources only play a single channel of an asset.</p>
<p>For audio that has no position, such as background music and maybe some effects, there is the audio player object, which is very similar to a source, but without all the effects and the position, and instead mixes directly into the final audio. I wanted to use iplConvertAudioBufferFormat to convert an input format to the desired output layout, but it doesn&apos;t work for all kinds of combinations and requires deinterleaved data while all my other data is interleaved, which complicated things. Instead I am now doing the conversion myself, only supporting mono and stereo source material, and may extend it in the future.</p>
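<p>My manual conversion can be sketched like this (a simplified, self-contained version with made-up names; like described above, it only handles mono and stereo input):</p>

```cpp
#include <cstdint>
#include <vector>

//Converts interleaved mono or stereo input to a given output channel count.
//Stereo to mono averages the two channels; mono input is copied into the first
//two output channels and any additional output channels stay silent.
std::vector<float> ConvertChannels(const std::vector<float> &input, uint32_t inChannels, uint32_t outChannels)
{
	uint32_t frameCount = static_cast<uint32_t>(input.size()) / inChannels;
	std::vector<float> output(frameCount * outChannels, 0.0f);
	for(uint32_t frame = 0; frame < frameCount; frame++)
	{
		if(inChannels == 1)
		{
			//Mono input: copy the sample into up to two output channels
			for(uint32_t c = 0; c < outChannels && c < 2; c++)
				output[frame * outChannels + c] = input[frame];
		}
		else //stereo input
		{
			float left = input[frame * 2 + 0];
			float right = input[frame * 2 + 1];
			if(outChannels == 1)
				output[frame] = 0.5f * (left + right); //downmix by averaging
			else
			{
				output[frame * outChannels + 0] = left;
				output[frame * outChannels + 1] = right;
			}
		}
	}
	return output;
}
```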
<p>All sources and players are combined in an audio world, which takes care of initializing libsoundio and Steam Audio and does the final mixing. The world also handles the geometry data.<br>
And because all the positional audio needs to know where the listener is, the world has a listener property which can be any scene node.</p>
<h3 id="the-indirect-audio">The Indirect Audio</h3>
<p>Each audio source can produce indirect audio using the Steam Audio convolution effect. It just needs to be created and provided with the new audio data every audio frame.<br>
The audio data is just a buffer I create by sampling the asset, multiplying each sample with the gain property and applying the pitch property as a multiplier on the time delta used to progress the source&apos;s internal time.<br>
Everything else is handled by Steam Audio.</p>
<h3 id="the-direct-audio">The Direct Audio</h3>
<p>For the direct audio there is the function iplGetDirectSoundPath which, given some information about the listener and the sound position, returns a direction, an attenuation factor, an occlusion factor and the time it takes the sound to travel from the source to the listener (and also air absorption values, which happen to not be supported and are always 0).</p>
<p>Because I want to output the resulting audio encoded as ambisonics, I am using the panning effect. Just like the convolution effect, the panning effect takes the audio for the current audio frame but it also takes a direction (the one returned for the direct sound path) and will then immediately output the result into an output buffer.<br>
The gain and pitch are used just like before and the attenuation and occlusion can just be multiplied with the sample.</p>
<p>I haven&apos;t tried it without, but in theory the propagation delay should be somewhat important in combination with the indirect audio. The change in delay between samples, caused by the sound source and/or listener moving, should also produce the <a href="https://en.wikipedia.org/wiki/Doppler_effect">doppler effect</a>.<br>
My solution is to calculate a delay per sample by incrementing the previous frame&apos;s delay by the change in delay per sample ((new_delay-old_delay)/sample_count) for every sample, so that the last sample will have a delay corresponding to the new frame&apos;s delay. This delay per sample is then used as a negative offset on the lookup time for each sample.<br>
It works quite well, but could probably be improved by smoothing the change in delay and maybe by reaching the frame&apos;s target delay at the sample in the center of the frame, but this would also introduce new issues.</p>
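<p>The per-sample delay ramp described above can be sketched like this (simplified, with made-up names):</p>

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

//Interpolates the propagation delay linearly across one audio frame, so that the
//last sample ends up with the new frame's target delay. Each value is then used as
//a negative offset on the sampler lookup time, which also produces the doppler effect.
std::vector<double> DelayPerSample(double oldDelay, double newDelay, uint32_t sampleCount)
{
	std::vector<double> delays(sampleCount);
	double delayStep = (newDelay - oldDelay) / sampleCount;
	double delay = oldDelay;
	for(uint32_t i = 0; i < sampleCount; i++)
	{
		delay += delayStep; //increment the previous frame's delay a little with every sample
		delays[i] = delay;
	}
	return delays;
}
```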
<p>The final ambisonics buffer is then passed on to the audio world.</p>
<h3 id="the-mixing">The Mixing</h3>
<p>The audio world loops over all sources and mixes their direct audio output buffers using iplMixAudioBuffers and mixes the result with the already mixed indirect sound returned by iplGetMixedEnvironmentalAudio.<br>
The ambisonics binaural or ambisonics panning effect is then used depending on the target speaker layout to decode the final ambisonics buffer to either two headphone channels using an HRTF or any other speaker layout.<br>
The result is then mixed with the transformed audio player buffers and passed to the system as the final audio.</p>
<h3 id="the-geometry">The Geometry</h3>
<p>To use the indirect audio features, Steam Audio needs geometry data. This is created as a scene object which is used to build an environment, which is used to build an environment renderer, which is then used to create the convolution effect (and also to calculate the direct sound path! The scene can be null in this case though...).</p>
<p>I implemented a mechanism that allows adding materials, and meshes with a material id, position and rotation, to the audio world. Calling an update method will then recreate all scene dependent Steam Audio objects using the new materials and geometry. There was nothing complicated about this and it just works.</p>
<h3 id="the-other-things">The Other Things</h3>
<p>Most additional effects on the per source audio could probably be added with some effect pipeline as part of the sampler.</p>
<p>Most of the per frame memory blocks can be reused between sources.</p>
<p>The linear interpolation to sample between samples does not appear to do such a great job. It could also be the source audio, but some frequencies appear to sound a bit dirty.</p>
<p>The current Steam Audio release contains a static library, but it has additional dependencies and will be dropped in the future. The Windows DLL on the other hand is 50 MB and thus adds massively to the size of any project using it. At least it compresses pretty well, to about 15 MB...</p>
<p>All Steam Audio effects for the source have internal state and thus, one per source should be used.</p>
<p>I am not using any of the serialization and audio source baking functionality yet, but both seem like a good idea and shouldn&apos;t be much more than calling another Steam Audio function (baking will require probes to be placed in the level though).</p>
<p>The center frequencies of the three frequency bands used for Steam Audio materials are 800 Hz, 4 kHz and 15 kHz.</p>
<p>The reverb effect could be used for audio sources without a convolution effect to fake a similar result. The convolution effect should be used sparingly due to the CPU overhead it adds.</p>
<p>With mainly just a flat ground plane and a wall blocking the audio, the direct audio occlusion will make it impossible to hear the source. This feels quite wrong, as in reality there are just so many small surfaces reflecting the sound everywhere. There might be ways to solve this, but it could get very tricky.</p>
<p>Finding good settings for the raycasting turned out to be a bit tricky. Especially a low number of rays seems to cause artifacts. An irDuration of 2.0 turned out way too slow, while everything is great when set to 1.0, somewhat independent of all other settings...<br>
I also noticed that when using an ambisonics order higher than three something is seriously wrong with the output (the source seems to be behind me while it is in front of me and other things).</p>
<h3 id="the-end">The End</h3>
<p>While not everything is perfect, the resulting audio is quite convincing and just works, without tweaking each source independently to have the right sound based on its environment. Also, I learned a few things about audio :).</p>
]]></content:encoded></item><item><title><![CDATA[Project Swords - Devlog 1 - The Masterplan and Physics Problems]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="blobbytennis">Blobby Tennis</h3>
<p>I just launched <em><strong>Blobby Tennis</strong></em> on <a href="http://store.steampowered.com/app/628530">Steam</a> and the <a href="https://www.oculus.com/experiences/rift/1449841275090025/">Oculus Store</a> as a very small, but free VR experience. Turns out that getting anything VR released on those two platform is currently very easy, as all it took was submitting it to the Oculus Store using their developer</p>]]></description><link>https://slindev.com/project-swords-devlog-1-the-masterplan-and-physics-problems/</link><guid isPermaLink="false">5d476f2eb2992b6818b1a377</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Mon, 15 May 2017 21:49:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/Swords1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="blobbytennis">Blobby Tennis</h3>
<img src="https://slindev.com/content/images/2019/08/Swords1.png" alt="Project Swords - Devlog 1 - The Masterplan and Physics Problems"><p>I just launched <em><strong>Blobby Tennis</strong></em> on <a href="http://store.steampowered.com/app/628530">Steam</a> and the <a href="https://www.oculus.com/experiences/rift/1449841275090025/">Oculus Store</a> as a very small, but free VR experience. It turns out that getting anything VR released on those two platforms is currently very easy: all it took was submitting it to the Oculus Store using their developer dashboard, while for Steam I had to send an email to some Valve address and got a developer invitation in return.</p>
<p>While this was and still is very exciting, it is too early to draw any conclusions or anything, but I might write some more about it in the future.</p>
<h3 id="masterplan">Masterplan</h3>
<p>But <em><strong>Blobby Tennis</strong></em> was really just a short stop on a longer plan of me making VR games. What I have now is a somewhat well working game engine with a DirectX 12 renderer and VR support, but a still limited feature set (no animations, no particles, only very hacky post processing and probably a lot more...).</p>
<p>My next and much more ambitious project is a VR multiplayer sword fighting game. I don&apos;t expect to be able to turn this into a realistic sword fighting simulator, but I want to make the most out of what&apos;s possible without hard haptic feedback.<br>
The amazing working title is &quot;Swords&quot;!!1111<br>
The great thing about this is that I can easily get a first prototype working with only few assets and keep iterating and expanding it for as long as I want to hopefully turn it into an enjoyable game.</p>
<p>And while doing all this I&apos;ll try to write a new devlog every now and then about my progress and problems I encountered along the way.<br>
And then the game gets released and is highly successful, I can start working on something new and pay other people to keep this game going; that is pretty much the plan :D</p>
<p>Oh, and I&apos;d also like to make the game accessible to others while still in development and, when it gets closer to completion, begin charging a small amount of money for it.<br>
I am not at a point where I have something to release, but I hope I will be by maybe the end of July.</p>
<p>For a first public build of the game I want to have some kind of moving around together in multiplayer with basic ingame voice chat.</p>
<h3 id="physicsproblems">Physics Problems</h3>
<p>I am using <a href="http://bulletphysics.org/">bullet physics</a> for the physics simulation, mainly because I have used it in many projects before, and while it caused lots of issues, it always kinda worked in the end.</p>
<p>One big problem I already had to solve for <em><strong>Blobby Tennis</strong></em> was attaching the racket to a hand. It turned out that using a constraint was somewhat unstable and especially caused way too random behaviour when hitting the ball. Sometimes the ball was accelerated far too little and sometimes it just flew out of the map.</p>
<p>For some reason the solution was to directly set the velocity of the racket such that it reaches the hand position within one step.<br>
Which in my code looks like this:</p>
<pre><code>RN::Vector3 speed = GetWorldPosition() - _racketBody-&gt;GetWorldPosition();
speed /= delta;
RN::Quaternion rotationSpeed = GetWorldRotation()*_racketBody-&gt;GetWorldRotation().GetConjugated();
RN::Vector4 axisAngleSpeed = rotationSpeed.GetAxisAngle();
if(axisAngleSpeed.w &gt; 180.0f)
   axisAngleSpeed.w -= 360.0f;
RN::Vector3 angularVelocity(axisAngleSpeed.x, axisAngleSpeed.y, axisAngleSpeed.z);
angularVelocity *= axisAngleSpeed.w*M_PI;
angularVelocity /= 180.0f;
angularVelocity /= delta;

_racketBody-&gt;SetLinearVelocity(speed);
_racketBody-&gt;SetAngularVelocity(angularVelocity);
</code></pre>
<p>While I represented the racket collider as a compound of just two cylinders to approximate the racket&apos;s head, I want objects such as swords and shields to have an accurate physics representation. Bullet offers basically two solutions for this:</p>
<ul>
<li>btGImpactMeshShape - Can handle concave and even changing meshes, but it is supposed to be slow and somewhat unstable.</li>
<li>Convex decomposition and then using either btConvexHullShape or btConvexTriangleMeshShape - A compound of btConvexHullShape objects seems to be the recommended way to do it.</li>
</ul>
<p>I tried all of them, and the btGImpactMeshShape didn&apos;t want to collide with my ground plane for some reason, so I didn&apos;t bother with it any further.<br>
There is some utility code in bullet to do a convex decomposition of arbitrary meshes, but doing this at runtime every time is very slow and the algorithm used is somewhat outdated. Fortunately there is <a href="http://kmamou.blogspot.de/2014/11/v-hacd-v20-is-here.html">V-HACD V2.0</a>, including a well working Blender plugin!<br>
The plugin is even nice enough to create a unique material per generated mesh, which allowed me to merge and export them as a single model in my custom format; I can now just load it and generate a shape for each of the meshes.<br>
While all this worked great from the start, it turned out that the resulting collision shape was WAY too big.<br>
It took me a day to find out that bullet applies additional margins to all its shapes. It is possible to turn them off, but that is not recommended; instead, the meshes used for the collision shapes are supposed to be shrunk in a special way that correctly takes the generation of the margins into account. I didn&apos;t manage to figure this out in blender, but bullet provides a utility function to do so in its btConvexHullComputer class:</p>
<pre><code>btConvexHullComputer *convexHullComputer = new btConvexHullComputer();
btScalar actualMargin = convexHullComputer-&gt;compute(vertices, stride, margin, 0.001f);
</code></pre>
<p>Here &apos;margin&apos; is the preferred margin, &apos;0.001&apos; makes sure that the hull doesn&apos;t get too small and &apos;actualMargin&apos; is the actual margin value used to shrink the mesh. I am then creating the btConvexHullShape using the vertices in &apos;convexHullComputer-&gt;vertices&apos; and setting the shape&apos;s margin to &apos;actualMargin&apos;.<br>
While not 100% accurate, the resulting collision meshes work well enough for my current needs. There are still issues where two objects (sword and shield) can pass through each other, especially at the cracks between two convex hull shapes, if enough force is applied. With both my objects following a hand, this is still way too easy to accomplish. I hope that adjusting the forces is going to be good enough in the future, but it may not be...</p>
<h3 id="whatsnext">What&apos;s Next</h3>
<p>While I won&apos;t be finished with physics any time soon, I want to get networking in next, but am somehow most excited to start on the voice chat. For this I first need audio playback. I used OpenAL Soft before, but it&apos;s a bit limited in terms of correct audio physics. So I just started implementing a somewhat custom audio solution utilizing <a href="https://valvesoftware.github.io/steam-audio/">Steam Audio</a> and <a href="http://libsound.io/">libsoundio</a>.<br>
I am probably going to use the <a href="http://opus-codec.org/">Opus Audio Codec</a> reference implementation for the voice chat codec and <a href="http://enet.bespin.org/">enet</a> for networking.<br>
I&apos;ll probably write more about all that in future devlogs :).</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[GGJ17 and Anchor]]></title><description><![CDATA[<h2 id="switch-to-anchor">Switch to Anchor</h2>
<p>As you may or may not have noticed, I switched the blog software from pants to Anchor. I reposted all posts, so their date is wrong but the content is still the same.<br>
I switched mostly because the few people making pants cool also started switching to</p>]]></description><link>https://slindev.com/ggj17-and-anchor/</link><guid isPermaLink="false">5d476e93b2992b6818b1a36c</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Sun, 12 Mar 2017 23:48:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/sushi4-Kopie.png" medium="image"/><content:encoded><![CDATA[<h2 id="switch-to-anchor">Switch to Anchor</h2>
<img src="https://slindev.com/content/images/2019/08/sushi4-Kopie.png" alt="GGJ17 and Anchor"><p>As you may or may not have noticed, I switched the blog software from pants to Anchor. I reposted all posts, so their dates are wrong, but the content is still the same.<br>
I switched mostly because the few people making pants cool also started switching to other blog systems. Also, I kept getting a message every few months from my host that they had to kill the process because it used up too much RAM.<br>
Anchor so far seems nice and simple.</p>
<h2 id="ggj17">GGJ17</h2>
<p>I also participated in the Global Game Jam 2017 in January, as usual at InnoGames. This time <a href="http://widerwille.com">Sidney</a> joined me, and together with an artist and a game designer we made a game for the original Game Boy. We wrote it in assembly, and while it took us a while to get going, we are really happy with the result.<br>
We used <a href="https://github.com/rednex/rgbds">RGBDS</a> for the tools to build an actual rom for the game, IntelliJ as IDE and <a href="https://github.com/AntonioND/gbt-player">GBT Player</a> for the music playback.<br>
We used a cartridge that reads ROMs from an SD card to play on a real device.</p>
<p>The game is called Sushi and can be downloaded from the GameJam website, including the <a href="http://bgb.bircd.org/">emulator BGB</a>, which we also used for testing: <a href="https://igjam.eu/jams/global-game-jam-2017/331/">Sushi</a></p>
<div class="light-video-player" data-service="youtube" data-id="jhui6_wkOuY"></div>]]></content:encoded></item><item><title><![CDATA[D3D12 Texture Mipmap Generation]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2>
<p>If you start writing a 3D graphics engine, the basics usually consist of loading mesh, texture and shader data and getting it to the GPU in a way that enables the GPU to run the shader with the mesh and texture data as input. To load the texture data</p>]]></description><link>https://slindev.com/d3d12-texture-mipmap-generation/</link><guid isPermaLink="false">5d476deeb2992b6818b1a362</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Sun, 12 Mar 2017 22:41:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/Image-2017-03-13-at-12.28.04-AM.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://slindev.com/content/images/2019/08/Image-2017-03-13-at-12.28.04-AM.png" alt="D3D12 Texture Mipmap Generation"><p>If you start writing a 3D graphics engine, the basics usually consist of loading mesh, texture and shader data and getting it to the GPU in a way that enables the GPU to run the shader with the mesh and texture data as input. To load the texture data in a platform independent way, easy formats to get started with are TGA, BMP and PNG (with a lot of help from libpng...). Much better for a real game are usually compressed formats such as S3TC that can be decompressed by the GPU while rendering at no performance cost. But at least I have a lot of PNG files lying around that I want to test with, and I don&apos;t really feel like converting everything until I have a real asset pipeline going.</p>
<p>However, just loading a PNG file and then using it for rendering results in massive aliasing with a moving camera for all surfaces that are not directly in front of the camera:</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Image-2017-03-10-at-11.10.19-PM.png" class="kg-image" alt="D3D12 Texture Mipmap Generation" loading="lazy" width="1038" height="805" srcset="https://slindev.com/content/images/size/w600/2023/10/Image-2017-03-10-at-11.10.19-PM.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Image-2017-03-10-at-11.10.19-PM.png 1000w, https://slindev.com/content/images/2023/10/Image-2017-03-10-at-11.10.19-PM.png 1038w" sizes="(min-width: 720px) 720px"></figure><p>The solution is of course mipmapping: by using lower resolution versions of the same texture depending on the size of a screen pixel on the textured surface, the aliasing can be eliminated:</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/Image-2017-03-13-at-12.28.04-AM.png" class="kg-image" alt="D3D12 Texture Mipmap Generation" loading="lazy" width="1038" height="805" srcset="https://slindev.com/content/images/size/w600/2023/10/Image-2017-03-13-at-12.28.04-AM.png 600w, https://slindev.com/content/images/size/w1000/2023/10/Image-2017-03-13-at-12.28.04-AM.png 1000w, https://slindev.com/content/images/2023/10/Image-2017-03-13-at-12.28.04-AM.png 1038w" sizes="(min-width: 720px) 720px"></figure><p>In OpenGL there is a function called glGenerateMipmap, which automatically generates those mipmaps for a given texture. DirectX up to DirectX 11 has a method called GenerateMips doing the same thing.<br>
Both are using the GPU and are somewhat fast as a result.<br>
It turns out that both new rendering APIs, Vulkan and Direct3D 12, got rid of this functionality.<br>
The easy solution would be to generate the mipmaps on the CPU, or, well, have an offline process generate them and ideally combine that with a compression format such as S3TC. If you don&apos;t want that, Vulkan has a helpful <a href="https://www.khronos.org/registry/vulkan/specs/1.0/man/html/vkCmdBlitImage.html">vkCmdBlitImage</a> function that copies a source texture to a destination texture while downsampling it. It is very easy to find <a href="https://github.com/SaschaWillems/Vulkan/tree/master/texturemipmapgen">this sample</a> when researching the topic, and while it took me a while to get it running, it kinda just works.</p>
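<p>Either way, the starting point for any of these approaches is knowing how many mipmap levels a texture needs and how big each level is. A small self-contained sketch (the function names are mine, for illustration):</p>

```cpp
#include <algorithm>
#include <cstdint>

//Number of mip levels for a full chain, down to 1x1.
uint32_t MipMapCount(uint32_t width, uint32_t height)
{
	uint32_t count = 1;
	uint32_t size = std::max(width, height);
	while(size > 1)
	{
		size >>= 1;
		count += 1;
	}
	return count;
}

//Dimension of a given mip level; each level halves, but never goes below 1.
uint32_t MipMapDimension(uint32_t baseDimension, uint32_t level)
{
	return std::max(baseDimension >> level, 1u);
}
```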
<h2 id="the-code">The Code</h2>
<p>For Direct3D 12 on the other hand there is no such blit function, and I just found some people talking about Microsoft&apos;s samples. And yes, after some digging it turned out that the <a href="https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine">MiniEngine</a> has functionality to generate mipmaps using a compute shader. But since I am not building on top of the <a href="https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine">MiniEngine</a>, and because it has quite a few layers of abstraction, it isn&apos;t very much plug and play...<br>
I copy-pasted it all together and somehow made it work. It is quite far away from the <a href="https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine">MiniEngine</a> code and absolutely not the way to do it, but it is something to get started with that should mostly just work, as everything needed is in one place:</p>
<pre><code>//_mipMapTextures is an array containing texture objects that need mipmaps to be generated. It needs a texture resource with mipmaps in D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE state.
//Textures are expected to be POT and in a format supporting unordered access, as well as the D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS set during creation.
//_device is the ID3D12Device
//GetNewCommandList() is supposed to return a new command list in recording state
//SubmitCommandList(commandList) is supposed to submit the command list to the command queue
//_mipMapComputeShader is an ID3DBlob of the compiled mipmap compute shader
void D3D12Renderer::CreateMipMaps()
{
	//Union used for shader constants
	struct DWParam
	{
		DWParam(FLOAT f) : Float(f) {}
		DWParam(UINT u) : Uint(u) {}

		void operator= (FLOAT f) { Float = f; }
		void operator= (UINT u) { Uint = u; }

		union
		{
			FLOAT Float;
			UINT Uint;
		};
	};

	//Calculate heap size
	uint32 requiredHeapSize = 0;
	_mipMapTextures-&gt;Enumerate&lt;D3D12Texture&gt;([&amp;](D3D12Texture *texture, size_t index, bool &amp;stop) {
		if(texture-&gt;mipMaps &gt; 1)
			requiredHeapSize += texture-&gt;mipMaps - 1;
	});

	//No heap size, means that there was either no texture or none that requires any mipmaps
	if(requiredHeapSize == 0)
	{
		_mipMapTextures-&gt;RemoveAllObjects();
		return;
	}

	//The compute shader expects 2 floats, the source texture and the destination texture
	CD3DX12_DESCRIPTOR_RANGE srvCbvRanges[2];
	CD3DX12_ROOT_PARAMETER rootParameters[3];
	srvCbvRanges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0);
	srvCbvRanges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 0, 0);
	rootParameters[0].InitAsConstants(2, 0);
	rootParameters[1].InitAsDescriptorTable(1, &amp;srvCbvRanges[0]);
	rootParameters[2].InitAsDescriptorTable(1, &amp;srvCbvRanges[1]);

	//Static sampler used to get the linearly interpolated color for the mipmaps
	D3D12_STATIC_SAMPLER_DESC samplerDesc = {};
	samplerDesc.Filter = D3D12_FILTER_MIN_MAG_LINEAR_MIP_POINT;
	samplerDesc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	samplerDesc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	samplerDesc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
	samplerDesc.MipLODBias = 0.0f;
	samplerDesc.ComparisonFunc = D3D12_COMPARISON_FUNC_NEVER;
	samplerDesc.MinLOD = 0.0f;
	samplerDesc.MaxLOD = D3D12_FLOAT32_MAX;
	samplerDesc.MaxAnisotropy = 0;
	samplerDesc.BorderColor = D3D12_STATIC_BORDER_COLOR_OPAQUE_BLACK;
	samplerDesc.ShaderRegister = 0;
	samplerDesc.RegisterSpace = 0;
	samplerDesc.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

	//Create the root signature for the mipmap compute shader from the parameters and sampler above
	ID3DBlob *signature;
	ID3DBlob *error;
	CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc;
	rootSignatureDesc.Init(_countof(rootParameters), rootParameters, 1, &amp;samplerDesc, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
	D3D12SerializeRootSignature(&amp;rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &amp;signature, &amp;error);
	ID3D12RootSignature *mipMapRootSignature;
	_device-&gt;CreateRootSignature(0, signature-&gt;GetBufferPointer(), signature-&gt;GetBufferSize(), IID_PPV_ARGS(&amp;mipMapRootSignature));

	//Create the descriptor heap with layout: source texture - destination texture
	D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
	heapDesc.NumDescriptors = 2*requiredHeapSize;
	heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
	heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
	ID3D12DescriptorHeap *descriptorHeap;
	_device-&gt;CreateDescriptorHeap(&amp;heapDesc, IID_PPV_ARGS(&amp;descriptorHeap));
	UINT descriptorSize = _device-&gt;GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

	//Create pipeline state object for the compute shader using the root signature.
	D3D12_COMPUTE_PIPELINE_STATE_DESC psoDesc = {};
	psoDesc.pRootSignature = mipMapRootSignature;
	psoDesc.CS = { reinterpret_cast&lt;UINT8*&gt;(_mipMapComputeShader-&gt;GetBufferPointer()), _mipMapComputeShader-&gt;GetBufferSize() };
	ID3D12PipelineState *psoMipMaps;
	_device-&gt;CreateComputePipelineState(&amp;psoDesc, IID_PPV_ARGS(&amp;psoMipMaps));


	//Prepare the shader resource view description for the source texture
	D3D12_SHADER_RESOURCE_VIEW_DESC srcTextureSRVDesc = {};
	srcTextureSRVDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
	srcTextureSRVDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;

	//Prepare the unordered access view description for the destination texture
	D3D12_UNORDERED_ACCESS_VIEW_DESC destTextureUAVDesc = {};
	destTextureUAVDesc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D;

	//Get a new empty command list in recording state
	ID3D12GraphicsCommandList *commandList = GetNewCommandList();

	//Set root signature, pso and descriptor heap
	commandList-&gt;SetComputeRootSignature(mipMapRootSignature);
	commandList-&gt;SetPipelineState(psoMipMaps);
	commandList-&gt;SetDescriptorHeaps(1, &amp;descriptorHeap);

	//CPU handle for the first descriptor on the descriptor heap, used to fill the heap
	CD3DX12_CPU_DESCRIPTOR_HANDLE currentCPUHandle(descriptorHeap-&gt;GetCPUDescriptorHandleForHeapStart(), 0, descriptorSize);

	//GPU handle for the first descriptor on the descriptor heap, used to initialize the descriptor tables
	CD3DX12_GPU_DESCRIPTOR_HANDLE currentGPUHandle(descriptorHeap-&gt;GetGPUDescriptorHandleForHeapStart(), 0, descriptorSize);

	_mipMapTextures-&gt;Enumerate&lt;D3D12Texture&gt;([&amp;](D3D12Texture *texture, size_t index, bool &amp;stop) {
		//Skip textures without mipmaps
		if(texture-&gt;mipMaps &lt;= 1)
			return;

		//Transition from pixel shader resource to unordered access
		commandList-&gt;ResourceBarrier(1, &amp;CD3DX12_RESOURCE_BARRIER::Transition(texture-&gt;_resource, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, D3D12_RESOURCE_STATE_UNORDERED_ACCESS));

		//Loop over the mip levels, downsampling each level into the next smaller one with the compute shader
		for(uint32_t TopMip = 0; TopMip &lt; texture-&gt;mipMaps-1; TopMip++)
		{
			//Get mipmap dimensions
			uint32_t dstWidth = std::max&lt;uint32_t&gt;(texture-&gt;width &gt;&gt; (TopMip+1), 1);
			uint32_t dstHeight = std::max&lt;uint32_t&gt;(texture-&gt;height &gt;&gt; (TopMip+1), 1);

			//Create shader resource view for the source texture in the descriptor heap
			srcTextureSRVDesc.Format = texture-&gt;_format;
			srcTextureSRVDesc.Texture2D.MipLevels = 1;
			srcTextureSRVDesc.Texture2D.MostDetailedMip = TopMip;
			_device-&gt;CreateShaderResourceView(texture-&gt;_resource, &amp;srcTextureSRVDesc, currentCPUHandle);
			currentCPUHandle.Offset(1, descriptorSize);

			//Create unordered access view for the destination texture in the descriptor heap
			destTextureUAVDesc.Format = texture-&gt;_format;
			destTextureUAVDesc.Texture2D.MipSlice = TopMip+1;
			_device-&gt;CreateUnorderedAccessView(texture-&gt;_resource, nullptr, &amp;destTextureUAVDesc, currentCPUHandle);
			currentCPUHandle.Offset(1, descriptorSize);

			//Pass the destination texture pixel size to the shader as constants
			commandList-&gt;SetComputeRoot32BitConstant(0, DWParam(1.0f/dstWidth).Uint, 0);
			commandList-&gt;SetComputeRoot32BitConstant(0, DWParam(1.0f/dstHeight).Uint, 1);
			
			//Pass the source and destination texture views to the shader via descriptor tables
			commandList-&gt;SetComputeRootDescriptorTable(1, currentGPUHandle);
			currentGPUHandle.Offset(1, descriptorSize);
			commandList-&gt;SetComputeRootDescriptorTable(2, currentGPUHandle);
			currentGPUHandle.Offset(1, descriptorSize);

			//Dispatch the compute shader with one 8x8 thread group per 8x8 pixel tile, rounding up so partially filled tiles are covered too
			commandList-&gt;Dispatch((dstWidth + 7) / 8, (dstHeight + 7) / 8, 1);

			//Wait for all accesses to the destination texture UAV to be finished before generating the next mipmap, as it will be the source texture for the next mipmap
			commandList-&gt;ResourceBarrier(1, &amp;CD3DX12_RESOURCE_BARRIER::UAV(texture-&gt;_resource));
		}

		//When done with the texture, transition its state back to be a pixel shader resource
		commandList-&gt;ResourceBarrier(1, &amp;CD3DX12_RESOURCE_BARRIER::Transition(texture-&gt;_resource, D3D12_RESOURCE_STATE_UNORDERED_ACCESS, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
	});

	//Close and submit the command list
	commandList-&gt;Close();
	SubmitCommandList(commandList);

	_mipMapTextures-&gt;RemoveAllObjects();
}
</code></pre>
<p>This is the compute shader code:</p>
<pre><code>Texture2D&lt;float4&gt; SrcTexture : register(t0);
RWTexture2D&lt;float4&gt; DstTexture : register(u0);
SamplerState BilinearClamp : register(s0);

cbuffer CB : register(b0)
{
	float2 TexelSize;	// 1.0 / destination dimension
}

[numthreads( 8, 8, 1 )]
void GenerateMipMaps(uint3 DTid : SV_DispatchThreadID)
{
	//DTid is the thread ID multiplied by the numthreads values above and in this case corresponds to the pixel&apos;s location in the destination mipmap.
	//As a result texcoords (in 0-1 range) points at the center between the 4 source pixels used for this mipmap pixel.
	float2 texcoords = TexelSize * (DTid.xy + 0.5);

	//The sampler&apos;s linear interpolation mixes the four pixel values into the new pixel&apos;s color
	float4 color = SrcTexture.SampleLevel(BilinearClamp, texcoords, 0);

	//Write the final color into the destination texture.
	DstTexture[DTid.xy] = color;
}
</code></pre>
<h2 id="essential-things-to-understand-when-getting-started-with-d3d12">Essential things to understand when getting started with D3D12</h2>
<p>By the way, I had a very hard time understanding descriptors, descriptor heaps and the root signature.<br>
It turns out that the root signature has to match the shader to some degree and describes the shader&apos;s constant, texture and sampler bindings. Descriptors are also called &quot;views&quot;: &quot;shader resource views&quot; (SRV), &quot;constant buffer views&quot; (CBV) and &quot;unordered access views&quot; (UAV). They don&apos;t have much to do with &quot;descriptions&quot; such as D3D12_RESOURCE_DESC, but instead tell the root signature where to find the data to be used in the shader. They are allocated on a descriptor heap, and since it is not recommended to switch the heap all the time, they should all be known before the command list is recorded. Then instead of switching the heap, a handle to the elements to use on the heap can be changed as often as needed. That last part is done with SetComputeRootDescriptorTable, but it is also possible to directly set a limited number of constants. For a bit more context, just look at my code above, as it needs all of this.</p>
<p><strong>Edit:</strong> Since gamma correct rendering is the right thing to do, I found out that the above code does not work for sRGB texture formats, as unordered access is not available for those. I solved it by copying my sRGB resource into a non-sRGB resource (rgba_8888_srgb can for example be copied into an rgba_8888 resource) using commandList-&gt;CopyResource(dest, src). One issue with this approach is that the mipmap generation then happens in gamma space. I solved that by sampling the four pixels individually, transforming them into linear space with pow(color, 2.2), averaging them and transforming the result back into gamma space using the inverse (pow(color, 1.0/2.2)).<br>
Also, the above code does not clean up the resources it creates. The tricky part is releasing them once the GPU no longer needs them, which is not at the end of the function but sometime later. An easy solution would be to submit the command list to the queue and wait for it to finish. The solution I am using is a global fence that is checked every frame; once its value passes the one recorded for the frame in which the mipmap command list was generated, a callback on the command list is called, which can then safely release the resources.</p>
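<p>To illustrate why averaging in gamma space is wrong, here is a minimal standalone C++ sketch (not the actual shader code; the helper names are made up and pow(x, 2.2) is the usual approximation of the sRGB curve):</p>

```cpp
#include <cassert>
#include <cmath>

// Downsample four stored (gamma-space) texel values to one, the naive way
// and the gamma-correct way: decode with pow(x, 2.2), average in linear
// space, re-encode with pow(x, 1/2.2).
float NaiveAverage(const float (&texels)[4])
{
	return (texels[0] + texels[1] + texels[2] + texels[3]) * 0.25f;
}

float GammaCorrectAverage(const float (&texels)[4])
{
	float sum = 0.0f;
	for(float t : texels)
		sum += std::pow(t, 2.2f);              // gamma -> linear
	return std::pow(sum * 0.25f, 1.0f / 2.2f); // average, then linear -> gamma
}
```

<p>For two black and two white texels {0, 0, 1, 1} the naive average is 0.5, while the gamma-correct result is roughly 0.73; averaging in gamma space systematically darkens the mipmaps.</p>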
]]></content:encoded></item><item><title><![CDATA[IGJAM16]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2>
<p>About half a year ago I ordered a ticket for Elbriot, which is a small metal festival in the middle of Hamburg. It used to be just one day, but was turned into a two day festival this year with bands such as Sabaton and Slayer playing. I was</p>]]></description><link>https://slindev.com/igjam16/</link><guid isPermaLink="false">5d476d4fb2992b6818b1a359</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Fri, 26 Aug 2016 23:42:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/Bildschirmfoto-2016-08-19-um-16.34.12.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://slindev.com/content/images/2019/08/Bildschirmfoto-2016-08-19-um-16.34.12.png" alt="IGJAM16"><p>About half a year ago I ordered a ticket for Elbriot, which is a small metal festival in the middle of Hamburg. It used to be just one day, but was turned into a two day festival this year with bands such as Sabaton and Slayer playing. I was really looking forward to it.</p>
<p>But then in June another Innogames Gamejam was announced. Bigger, better, Gamescom. The idea was to have a gamejam in one of the big congress halls above the Gamescom during its first three days. The 50&#x20AC; ticket included a Gamescom ticket for the three days, the possibility to sleep there and food. So all in all a really nice deal, and thanks to funding from Intel and other sponsors they flew in people from all over the world.</p>
<p>Of course I immediately registered :D</p>
<h2 id="travelling-and-first-day">Travelling and first day</h2>
<p>I travelled by train and got there a day early in the middle of the day and got a cheap room in some hostel close to the K&#xF6;ln Messe (Which all together was still cheaper than the early train on Wednesday).</p>
<p>We were supposed to get our tickets at the Gamescom at 10, so I arrived at 9:20 or so and had to wait until 10 together with lots of others. It would have been nice if we could have gotten in earlier, but I guess they had their reasons.<br>
I suck at talking to people I don&apos;t know, so I was just waiting and eventually got my ticket. Unfortunately they were still preparing the hall for us and didn&apos;t let us in, but were nice enough to allow us to drop off our things. I somehow ended up talking to two Turkish guys and a French guy, and we decided to take a short tour around the Gamescom until the gamejam started.<br>
We lost the French guy somewhere (he told us to go on while he was checking out some booth).</p>
<p>We didn&apos;t really get to see anything in these 1.5h, but we got a good idea of what was where. When we made it back to the location of the gamejam they let us in, and being an artist, a game designer and a programmer, we decided to form a team.</p>
<p>At some point there was another game designer and another programmer interested in helping us, but I scared them away with my choice of SFML... So in the end we were a team of three people.</p>
<p>There was a keynote at the beginning of the jam by Adriel Wallick, which had its moments, but in my opinion could have been a bit shorter.<br>
Then the theme was announced: <em><strong>Masks</strong></em></p>
<p>The organisers tried some get-to-know-you games, which weren&apos;t too bad, but not really my thing either -.- A nice touch was them trying to split up existing teams of people who already knew each other into new teams by having everyone talk to anyone with similar game choices, which seemed to have worked pretty well.<br>
I stuck to my small team anyway, but was still trying to find another programmer who didn&apos;t only want to do Unity. I failed.</p>
<p>Being the Gamescom&apos;s press day, it was the only day with the possibility to see a few games with less than an hour of waiting. I also had a Sony VR presentation scheduled at 16:30. We took something to write with, found a first line to stand in and brainstormed some ideas. This was actually a great idea, but unfortunately the line was too long, so I had to leave early for my appointment at Sony... When I got back the others had finally made it to the game and got to play for a really long time. I decided to walk around some more and caught an underwhelming presentation of Mafia 3 (I love the series and I am sure the game will be great. But the presentation wasn&apos;t...). Also, PlayStation VR reminds me a lot of the Oculus DK2, but with lots of interesting adjustments made to the headset. If the games are good (and some will be) it makes a solid VR headset.</p>
<p>At this point the gamescom was almost over and we went back to the jam. Somehow we finally came up with an idea for a local multiplayer game.<br>
Then we got dinner, which turned out not to be enough for everyone, so some pizzas were ordered in addition.<br>
They did learn from it for the next day. But in general the food and drink situation could have been better.</p>
<h2 id="making-the-game">Making the game</h2>
<p>Engin, our game designer, was figuring out gameplay details and Can, the artist, made some placeholders for me. Then they both went to bed early and I stayed up until 4:30 to get an early prototype working. Based on the good experience with Huitzilopochtly I used SFML and Box2D again and was able to reuse some of the basic code I had written before. In the end it was possible to move around and throw masks.</p>
<p>I got up for breakfast at 9 (I slept behind our table on my very comfortable new sleeping mat with earplugs), Can and Engin were already doing things.<br>
I spent the day implementing the intended mask behaviour and had something fully working an hour after the playtesting on the stage ended in the evening...<br>
By then I also had most of the graphics.</p>
<p>The last night was used for a start screen and sounds and I even made OSX and Windows builds of the game as a test run and finally decided to sleep at 6:30.<br>
Getting up again at 9 was hard, but being the only programmer I didn&apos;t really expect to sleep at all and it was enough to get me through the day.<br>
This time the others also stayed up as long as I did, working on additional graphics and improving the start screen.</p>
<p>In the morning I implemented a game over screen and animations for everything. And made Windows and OSX builds. Since we still had 10 minutes until the deadline I spent those adding better transitions between screens and a control screen in the beginning and was done 5 minutes late :D.</p>
<h2 id="the-end">The end</h2>
<p>The judges were running around and testing the games. I think three of them actually saw our game, and I have no idea if that&apos;s how it was supposed to be or if there should have been more...<br>
We also managed to crash the game a couple of times when playing ourselves, fortunately it worked for others.<br>
There was an award show in the evening at 7. Unfortunately I had to catch my train by then. I am pretty sure we didn&apos;t win anything, but I also don&apos;t know who did.</p>
<p>I spent Saturday at the Elbriot, which was also nice :D.</p>
<h2 id="media">Media</h2>
<p>This is us on the second evening...</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/blubb.jpg" class="kg-image" alt="IGJAM16" loading="lazy" width="960" height="720" srcset="https://slindev.com/content/images/size/w600/2023/10/blubb.jpg 600w, https://slindev.com/content/images/2023/10/blubb.jpg 960w" sizes="(min-width: 720px) 720px"></figure><p>A short gameplay video of the game.</p>
<div class="light-video-player" data-service="youtube" data-id="qdX6xWuQ1Vw"></div>
<p>It can be downloaded on the Gamejam page: <a href="https://www.igjam.eu/jams/igjam-gamescom-2016/156/">Maskplosion</a></p>
<p>You can also find the other games there.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Quadcopter - Part 5: IMU and PID]]></title><description><![CDATA[<h2 id="inertial-measurement-unit">Inertial Measurement Unit</h2><p>I am using the Sparkfun 9DoF Block for Intel Edison which has an accelerometer, gyroscope and magnetometer. And while the accelerometer points down if it does not move it might also sometimes point in any other direction. The gyroscope on the other hand only knows how fast</p>]]></description><link>https://slindev.com/building-a-quadcopter-part-5-imu-and-pid/</link><guid isPermaLink="false">5d476d03b2992b6818b1a34f</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Fri, 26 Aug 2016 23:40:00 GMT</pubDate><content:encoded><![CDATA[<h2 id="inertial-measurement-unit">Inertial Measurement Unit</h2><p>I am using the Sparkfun 9DoF Block for Intel Edison which has an accelerometer, gyroscope and magnetometer. And while the accelerometer points down if it does not move it might also sometimes point in any other direction. The gyroscope on the other hand only knows how fast it rotates into which direction and I am ignoring the magnetometer for now.<br>Also vibrations can cause a lot of noise in all those readings.<br>There are a couple of very clever and complicated ways to combine this data into a good orientation, but I couldn&apos;t get them to work for me since I didn&apos;t fully understand them...</p><p>Fortunately somewhere someone wrote about just combining the rotation information from the accelerometer with the current orientation based on the sum of the previous orientation and the gyroscope data. This is easy enough and turns out to work very well.</p><p>The accelerometer points in the direction a force is applied to it, which will be downwards due to gravity if it isn&apos;t moved otherwise. 
There is no yaw information in the accelerometer data, but pitch and roll can be calculated like this based on normalized accelerometer values:</p><p><code>float pitch = atan2(-accelerometer.x, sqrt(accelerometer.y*accelerometer.y + accelerometer.z*accelerometer.z))*180.0/M_PI;</code></p><p><code>float roll = atan2(accelerometer.y,( accelerometer.z&gt;0.0f?1.0f:-1.0f)*sqrt(accelerometer.z*accelerometer.z + 0.001f*accelerometer.x*accelerometer.x))*180.0/M_PI;</code></p><p>To get rid of some noise I am also doing some simplistic lowpass filtering after this by lerping the old and new values.</p><p>The gyroscope data is added to the previous orientation (ignoring yaw for now) and the resulting new orientation is then lerped with the orientation from the accelerometer, currently weighting the gyro-based data very strongly.</p><p><code>orientation = Quaternion::WithLerpSpherical(_accelerometer, _gyroscope, 0.99f);</code></p><p>The result is quite stable and accurate.</p><h2 id="proportional-integral-differential-controller">Proportional-Integral-Differential Controller</h2><p>I thought I could get away with just the proportional part, but it turned out not to work very well, so I implemented the whole thing, which is easy enough to do.<br>Wanting to be clever, I am not controlling the angle but the distance between each motor&apos;s actual position and its target position. I have a feeling that this is not going to work out very well once I include yaw, but for now it does the job.<br>What turned out to be tricky is finding good values for the P, I and D parts.<br>I started out with the quadcopter on the floor of my room and had to reprint lots of parts because it did anything but hover.</p><p>Then I built a rig where I attached two arms to a string and only tested one axis. 
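</p>
<p>For reference, the textbook PID update can be sketched like this (a minimal standalone C++ sketch with made-up names, not my actual flight code):</p>

```cpp
#include <cassert>

// Minimal PID controller: output = P*error + I*accumulated error + D*error slope.
struct PIDController
{
	float kp, ki, kd;
	float integral = 0.0f;
	float previousError = 0.0f;

	float Update(float error, float timeStep)
	{
		integral += error * timeStep;                          // I: accumulate the error over time
		float derivative = (error - previousError) / timeStep; // D: how fast the error changes
		previousError = error;
		return kp * error + ki * integral + kd * derivative;   // weighted sum of the three parts
	}
};
```

<p>The proportional part reacts to the current error, the integral part removes a remaining constant offset over time and the derivative part dampens the reaction; finding good gains for all three is the tuning problem.</p>
<p>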
My current values still aren&apos;t great, but somewhat working.</p><p>It somehow stays not level at some point, but checking the IMU data it knows about it and is controlling the motors correctly. I think for some reason one of the motors is just spinning slower than the other for the same value sent to the speed controller. I am currently investigating this...</p><p>As always the code can be found on github:</p><ul><li><a href="https://github.com/Slin/Chloe">Quadcopter</a></li><li><a href="https://github.com/Slin/ChloeControl">Client for the remote control part</a></li><li><a href="https://github.com/Slin/ChloeFrame">Blender files for the frame</a></li></ul><p>And as you can see there is also the software to control it with. It uses enet for the networking and implements controls using a PS4 controller connected with a cable to the PC running the program.<br>It is all still very much WIP, but might be good enough for a first flight soon :).</p>]]></content:encoded></item><item><title><![CDATA[Building a Quadcopter - Part 4: Printing the frame]]></title><description><![CDATA[<h2 id="my-setup">My Setup</h2>
<p>I own an Ultimaker 2 and upgraded it with the <a href="https://ultimaker.com/en/community/8689-custom-heater-block-to-fit-e3d-nozzle-on-ultimaker-2-the-olsson-block">Olsson Block</a> which is a custom heater block that allows for easy nozzle switching.<br>
I used <a href="https://ultimaker.com/en/products/cura-software">Cura</a> as slicer in the past, but switched to <a href="https://www.simplify3d.com/">Simplify3D</a>. Simplify3D is faster than Cura (but speed with Cura isn&apos;t</p>]]></description><link>https://slindev.com/building-a-quadcopter-part-4-printing-the-frame/</link><guid isPermaLink="false">5d476c8cb2992b6818b1a346</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Fri, 26 Aug 2016 23:38:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/IMG_1168.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="my-setup">My Setup</h2>
<img src="https://slindev.com/content/images/2019/08/IMG_1168.jpg" alt="Building a Quadcopter - Part 4: Printing the frame"><p>I own an Ultimaker 2 and upgraded it with the <a href="https://ultimaker.com/en/community/8689-custom-heater-block-to-fit-e3d-nozzle-on-ultimaker-2-the-olsson-block">Olsson Block</a> which is a custom heater block that allows for easy nozzle switching.<br>
I used <a href="https://ultimaker.com/en/products/cura-software">Cura</a> as my slicer in the past, but switched to <a href="https://www.simplify3d.com/">Simplify3D</a>. Simplify3D is faster than Cura (though speed isn&apos;t a problem with Cura either) and the slicing quality is better most of the time. Comparing overhangs from both slicers, those from Simplify3D just turn out better, and it has a great manual support feature. But the default settings seem to have some retraction issues with my printer, and just changing the layer height can cause the parts to be extremely weak.<br>
I also set up a Raspberry Pi with <a href="http://octoprint.org/">OctoPrint</a> and connected a webcam to it. This allows me to monitor a print even when I am not around my printer, so if anything is wrong I can just cancel the print and not waste a lot of filament. I also attached a servo to my printer&apos;s on/off switch to remotely turn it on and off.</p>
<h2 id="materials">Materials</h2>
<p>While PLA and ABS are the two most common materials in fused deposition modeling, there are a couple of different filament manufacturers with their own takes on materials with different properties.<br>
So far I experimented with PLA, ABS, colorFabb XT and colorFabb CF20.</p>
<h3 id="pla">PLA</h3>
<p>PLA has a low melting point (~160&#xB0;C) and a low glass transition temperature (~60&#xB0;C), which makes it very easy to print but not suitable for applications where it can get hot (&gt;55&#xB0;C). It also isn&apos;t very stiff and is somewhat soft.<br>
In my experience it is hard to do anything wrong with PLA; it just always turns out well, and it is what my current quadcopter prototype is made of.<br>
I am printing it at 215&#xB0;C, directly on my build plate at 60&#xB0;C.</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/IMG_1207.jpg" class="kg-image" alt="Building a Quadcopter - Part 4: Printing the frame" loading="lazy" width="2000" height="1500" srcset="https://slindev.com/content/images/size/w600/2023/10/IMG_1207.jpg 600w, https://slindev.com/content/images/size/w1000/2023/10/IMG_1207.jpg 1000w, https://slindev.com/content/images/size/w1600/2023/10/IMG_1207.jpg 1600w, https://slindev.com/content/images/size/w2400/2023/10/IMG_1207.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="abs">ABS</h3>
<p>ABS has a much higher melting point (~230&#xB0;C) and glass transition temperature (~100&#xB0;C) than PLA.<br>
While these are useful properties for some applications, they also make it harder to print. The main issue I have been having was keeping it fixed on the build plate. Just like everything else, it shrinks a bit when cooling down, but with the higher temperatures it cools down a lot faster than PLA, which pulls the part off the build plate. What helps is a higher room temperature and a hotter build plate. A brim is also useful. Some people encase their printers to keep the heat in.<br>
What works for me is turning the heaters on to the maximum, a print temperature of 260&#xB0; and the build plate at 115&#xB0;.<br>
My prints didn&apos;t turn out perfect, but good enough to use.</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/IMG_0383.jpg" class="kg-image" alt="Building a Quadcopter - Part 4: Printing the frame" loading="lazy" width="2000" height="2667" srcset="https://slindev.com/content/images/size/w600/2023/10/IMG_0383.jpg 600w, https://slindev.com/content/images/size/w1000/2023/10/IMG_0383.jpg 1000w, https://slindev.com/content/images/size/w1600/2023/10/IMG_0383.jpg 1600w, https://slindev.com/content/images/size/w2400/2023/10/IMG_0383.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="colorfabb-xt">colorFabb XT</h3>
<p>I think the idea of this material is to find a middle ground between the properties of ABS and the ease of use of PLA.<br>
The problems are similar to those with ABS, but printing it at similar settings turns out much nicer.</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/IMG_0382.jpg" class="kg-image" alt="Building a Quadcopter - Part 4: Printing the frame" loading="lazy" width="2000" height="2667" srcset="https://slindev.com/content/images/size/w600/2023/10/IMG_0382.jpg 600w, https://slindev.com/content/images/size/w1000/2023/10/IMG_0382.jpg 1000w, https://slindev.com/content/images/size/w1600/2023/10/IMG_0382.jpg 1600w, https://slindev.com/content/images/size/w2400/2023/10/IMG_0382.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="colorfabb-cf20">colorFabb CF20</h3>
<p>This is the material I would like to use for my quadcopter in the future. It contains 20% carbon fiber and is a lot stiffer than the other materials I tried.<br>
Printing is again at settings similar to ABS. The main problem I have with it is that it likes to stick to the nozzle and form a blob, which then at some point ends up somewhere in the print. This can be acceptable, but it sometimes happens to land in critical spots...<br>
Also this one is known for destroying nozzles, so a hardened steel nozzle should be used.</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/IMG_1168.jpg" class="kg-image" alt="Building a Quadcopter - Part 4: Printing the frame" loading="lazy" width="2000" height="2667" srcset="https://slindev.com/content/images/size/w600/2023/10/IMG_1168.jpg 600w, https://slindev.com/content/images/size/w1000/2023/10/IMG_1168.jpg 1000w, https://slindev.com/content/images/size/w1600/2023/10/IMG_1168.jpg 1600w, https://slindev.com/content/images/size/w2400/2023/10/IMG_1168.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h2 id="printing">Printing</h2>
<p>My current parts are all printed in PLA at a layer height of 0.15mm with Simplify3D&apos;s default speed settings. Most of the frame uses 30% infill, except for the parts the motors are mounted on, which use 100% infill. I found out that at 30% they vibrate heavily, while at 100% there is very little vibration.<br>
For this part and the arms I am using a brim, because some areas of these don&apos;t stick to the build plate without one.<br>
Most important for parts sticking to the build plate is to keep it clean. I am cleaning mine with spirit-based glass cleaner before each print and it makes a big difference.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Quadcopter - Part 3: Designing the frame]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2>
<p>I am not exactly great at this whole thing and I am constantly trying to improve things, so this is based on my design at the time of writing which is the result of a lot of trial and error.<br>
I should probably be using a nice CAD tool,</p>]]></description><link>https://slindev.com/building-a-quadcopter-part-3-designing-the-frame/</link><guid isPermaLink="false">5d476b10b2992b6818b1a32b</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Fri, 26 Aug 2016 23:32:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/IMG_0426.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<img src="https://slindev.com/content/images/2019/08/IMG_0426.jpg" alt="Building a Quadcopter - Part 3: Designing the frame"><p>I am not exactly great at this whole thing and I am constantly trying to improve things, so this is based on my design at the time of writing which is the result of a lot of trial and error.<br>
I should probably be using a nice CAD tool, but I am not exactly familiar with any so my tool of choice is <a href="http://blender.org">Blender</a>.<br>
The focus of my design is printability and functionality.<br>
Most important for printability is to avoid overhangs as much as possible, because they usually don&apos;t turn out as well as you&apos;d wish, even with supports, which need to be removed once printing is done and usually require some additional sanding.</p>
<h2 id="designing-the-body">Designing the body</h2>
<p>Since the battery is the biggest and heaviest part, I started my design with it and went from there. But since I got differently sized batteries, I also needed a design that supports different sizes.<br>
My result looks like this:<br>
<img src="https://slindev.com/content/images/2023/10/Image-2016-05-08-at-3.18.07-.png" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>The two lower parts are glued together and hold the battery. The lowest part can be printed in different sizes for different batteries and should hold it tightly. The holes are there to get air close to the battery in case it gets warm.<br>
The lower parts together slide into the top part and are held in place by the stick.<br>
The top part can have several slots for the stick to support the different battery cases.<br>
The center part has some guides for the lower part to make the glueing easier.</p>
<p>This is my printed result for the small battery:<br>
<img src="https://slindev.com/content/images/2023/10/IMG_1208.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<h2 id="designing-the-arms">Designing the arms</h2>
<p>The most important thing about the arm length is that the propellers must not touch! Even material with carbon fiber, let alone PLA, is not perfectly stiff, so you should also keep some additional distance between the motors.</p>
<p>The resulting arm length can be calculated with the Pythagorean theorem (and some rearranging): (radius*2+safety)/sqrt(2) = length</p>
<p>My propellers have a radius of 12.5cm and I decided on 8cm of space for safety: (12.5*2+8)/sqrt(2) = 33/sqrt(2) = ~23.3cm</p>
<p>The arms turned out to be a bit longer than that in the end, but I used this as my minimum.</p>
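<p>The same minimum length can be expressed in a few lines of code (a trivial standalone C++ sketch of the formula above; the function name is made up):</p>

```cpp
#include <cassert>
#include <cmath>

// Minimum arm length so the propeller disks of two neighbouring arms
// (90 degrees apart, plus a safety gap) don't overlap:
// the distance between motor axes is length*sqrt(2), which must be
// at least radius*2 + safety.
float MinimumArmLength(float propellerRadius, float safety)
{
	return (propellerRadius * 2.0f + safety) / std::sqrt(2.0f);
}
```
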
<p><img src="https://slindev.com/content/images/2023/10/Image-2016-05-08-at-3.44.09-.png" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"><br>
The blue part has holes to fit the screws for the motor and slides into the actual arm. Both can be held together with a nail or something fitting the small hole at the side of both parts.</p>
<p>The arm has some guides for the cables and space for the ESC:<br>
<img src="https://slindev.com/content/images/2023/10/Image-2016-05-08-at-3.50.27-.png" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>Because the connection between the blue part and the arm works very well I started out with a similar approach to connect the arm to the body, but for several different reasons it didn&apos;t really work out. And while I did print a somewhat working quadcopter with that design and a lot of glue, at some point one of the arms broke off and gave up on my tries to fix it. Instead I changed the connection design:<br>
<img src="https://slindev.com/content/images/2023/10/Image-2016-05-08-at-4.00.44-.png" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"><br>
It turned out to be very stable, but the print quality needs to be high otherwise things won&apos;t slide together well and one or both parts will break. Some sanding can also help improve the situation.</p>
<h2 id="putting-the-edison-on-top-of-everything">Putting the Edison on top of everything</h2>
<p>I modeled a simple case for the Edison, but didn&apos;t print it yet, so it may or may not do the job... It does allow for a bit of airflow around the Edison, which will be facing the bottom of the case because I will have the SparkFun display block at the top.<br>
<img src="https://slindev.com/content/images/2023/10/Image-2016-05-08-at-4.19.27-.png" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"><br>
The two highlighted parts will be glued together.</p>
<h2 id="the-current-state-of-my-quadcopter">The current state of my quadcopter</h2>
<p>It is only missing two more arms, which are halfway through printing while I write this. Printing two arms at once takes about 9h at my current print settings. And of course the Edison case is also missing, but I&apos;ll hopefully get to that in the next few days.<br>
On the coding side I managed to get some controls working that make the quadcopter hold its orientation, but I still need to find some good values to make it work with the real thing.<br>
I also experimented with printing carbon fiber filament and plan to replace some of the quadcopter parts with it, but printing it well is a lot harder than with PLA.<br>
<img src="https://slindev.com/content/images/2023/10/IMG_1212.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<h2 id="some-older-photos">Some older photos</h2>
<p>Iterations on the arm design, oldest on top, latest not there ;)<br>
<img src="https://slindev.com/content/images/2023/10/IMG_0420.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>The stupid old connection<br>
<img src="https://slindev.com/content/images/2023/10/IMG_0394.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>Two arms with motor and ESC<br>
<img src="https://slindev.com/content/images/2023/10/IMG_0398.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>The whole thing<br>
<img src="https://slindev.com/content/images/2023/10/IMG_0426.jpg" alt="Building a Quadcopter - Part 3: Designing the frame" loading="lazy"></p>
<p>Somewhat flying...</p>
<div class="light-video-player" data-service="youtube" data-id="WYUeaFBT6vM"></div>]]></content:encoded></item><item><title><![CDATA[Building a Quadcopter - Part 2: Making the motors rotate]]></title><description><![CDATA[<h2 id="preparing-the-hardware">Preparing the hardware</h2>
<p>The Afro ESC supports different ways of communication (PWM, I2C, UART), but PWM is most common for these and those pins are nicely exposed with a JR-Style Servo cable.</p>
<p>The Edisons PWM block did just have the pins exposed as some holes, so I got cables with</p>]]></description><link>https://slindev.com/building-a-quadcopter-part-2-making-the-motors-rotate/</link><guid isPermaLink="false">5d476a30b2992b6818b1a31e</guid><dc:creator><![CDATA[Nils Daumann]]></dc:creator><pubDate>Fri, 26 Aug 2016 23:28:00 GMT</pubDate><media:content url="https://slindev.com/content/images/2019/08/IMG_0374.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="preparing-the-hardware">Preparing the hardware</h2>
<img src="https://slindev.com/content/images/2019/08/IMG_0374.jpg" alt="Building a Quadcopter - Part 2: Making the motors rotate"><p>The Afro ESC supports different ways of communication (PWM, I2C, UART), but PWM is most common for these and those pins are nicely exposed with a JR-Style Servo cable.</p>
<p>The Edison&apos;s PWM block just has the pins exposed as holes, so I got cables with the counterpart (judging by the shape of the actual contacts, the ESCs have female connectors, so I needed male ones, but judging by the overall shape of the housings it would be the other way around, and both conventions are in use...).<br>
I soldered those cables to the board and connected a jumper so it uses the system voltage to power the PWM; otherwise I would have had to connect an additional power source to the PWM board.</p>
<p>These ESCs also expose a BEC via the same cable, which is (unless stated otherwise) a 5V source that can be used to power things. In my case it provides a current of up to 500mA, which is perfectly within the USB specifications.<br>
The BEC sits between the red and brown cables coming from the ESC, while the PWM signal needs the orange and the brown cable. So I did not attach the red cable to the Edison via the PWM block; instead I cut a micro USB cable in half, attached the BEC of one of the ESCs to the USB cable, and plugged it into the Edison&apos;s base block to power it. I didn&apos;t use the power block or anything because the Edison does not support full 5V, but the base block does some magic to make it work with USB&apos;s 5V.</p>
<figure class="kg-card kg-image-card"><img src="https://slindev.com/content/images/2023/10/IMG_0374.jpg" class="kg-image" alt="Building a Quadcopter - Part 2: Making the motors rotate" loading="lazy" width="2000" height="1500" srcset="https://slindev.com/content/images/size/w600/2023/10/IMG_0374.jpg 600w, https://slindev.com/content/images/size/w1000/2023/10/IMG_0374.jpg 1000w, https://slindev.com/content/images/size/w1600/2023/10/IMG_0374.jpg 1600w, https://slindev.com/content/images/size/w2400/2023/10/IMG_0374.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><h2 id="talking-to-the-escs">Talking to the ESCs</h2>
<p>PWM, or pulse width modulation, works by sending a high signal for some amount of time; the receiver measures how long the high signal was sent and knows what to do based on that time.<br>
A high signal is anything above roughly 1.5V for most RC equipment.<br>
In general in the RC world the high signal (or pulse) has a length of around 1ms to indicate the minimum value (the motor does not rotate/the servo rotates to its lowest position) and a length of about 2ms for the maximum value (motor at full speed/servo fully rotated). Looking at the sparse documentation of these ESCs, I found the lowest value at 1060&#x3BC;s and the value for full power at 1860&#x3BC;s.</p>
<p>Mostly for safety reasons, ESCs usually don&apos;t allow starting with a rotation; instead they wait for an arm value, which is something below the minimum value, and once that has been sent at least once a speed can be provided.<br>
I used an arm value of about 1000&#x3BC;s, but it can also be somewhat smaller or bigger.</p>
<p>The signal needs to be sent constantly though. Any frequency higher than 20Hz or so works, but due to the way some ESCs work there is a habit of sending at much higher frequencies. Just keep in mind that to send a 2ms signal you can&apos;t go faster than 1s/2ms = 500Hz. I am currently sending at around 400Hz.</p>
<h2 id="pca9685">PCA9685</h2>
<p>That is the PWM controller used by the Sparkfun block. It is made for dimming LEDs, but also works for other use cases...<br>
It supports 24Hz up to 1526Hz, communicates via I2C and has a 12 bit resolution of 4096 steps.<br>
The 12 bit resolution means that when operating at, for example, 100Hz, the resulting 10ms (1s/100) are split into 4096 evenly spaced parts, and the pulse length has to be a multiple of 10ms/4096 = ~0.0024ms.<br>
Instead of being sent a time, the PWM controller expects a number of steps, which it scales with the step length derived from its prescale register. This same prescale register also sets the signal frequency.</p>
<h2 id="the-code">The code</h2>
<p>Fortunately Sparkfun provides some code for the PWM block which takes care of the I2C communication and provides some functions to set stuff (but none takes a time directly...).</p>
<p>So far I just wrote a very minimalistic wrapper over their code, initializing everything with what they provide as &quot;servo-mode&quot;, setting a higher frequency (prescaler 14 for ~400Hz) and then sending an arm signal.<br>
Once that is done, a speed between 0 and 1 can be set for each motor.</p>
<p>I thought I had figured out the math for the correct timings, but the actual number of steps turned out to be quite different, so my math was probably wrong...<br>
I found those by trial and error (prescaler 14):<br>
Arm - 1730<br>
Minimum - 1960<br>
Maximum - 3500</p>
<p>The code is on <a href="https://github.com/Slin/Chloe">github</a>.</p>
<h2 id="my-toolchain">My toolchain</h2>
<p>The Edison runs the latest version of Yocto provided for it by Intel.<br>
I connect to it via SSH over WiFi and compile directly on it using GCC.<br>
My IDE of choice is CLion, which bases its projects on CMake and has a feature to transfer files directly to a server, in my case via SFTP to the Edison.<br>
I plan on improving this setup by adding custom build configurations that run the same things I currently use the terminal for, but I am not sure whether that will really make anything better...</p>
<h2 id="profit">Profit!</h2>
<div class="light-video-player" data-service="youtube" data-id="EDccxPMg7hc"></div>]]></content:encoded></item></channel></rss>