<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[3d6564]]></title><description><![CDATA[building an open-source tool and having fun along the way]]></description><link>https://blog.3d6564.com</link><image><url>https://blog.3d6564.com/img/substack.png</url><title>3d6564</title><link>https://blog.3d6564.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 11:28:25 GMT</lastBuildDate><atom:link href="https://blog.3d6564.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[cory robinson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[3d6564@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[3d6564@substack.com]]></itunes:email><itunes:name><![CDATA[Cory Robinson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Cory Robinson]]></itunes:author><googleplay:owner><![CDATA[3d6564@substack.com]]></googleplay:owner><googleplay:email><![CDATA[3d6564@substack.com]]></googleplay:email><googleplay:author><![CDATA[Cory Robinson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[I tried studying for the CISSP]]></title><description><![CDATA[and passed.]]></description><link>https://blog.3d6564.com/p/i-tried-studying-for-the-cissp</link><guid isPermaLink="false">https://blog.3d6564.com/p/i-tried-studying-for-the-cissp</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Fri, 13 Mar 2026 20:56:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2b02e437-35d2-45d5-a954-8e935ee185b6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you spend any time reading about the Certified Information 
Systems Security Professional (CISSP), you&#8217;ll eventually see things like:</p><blockquote><p>&#8220;How I passed the CISSP in 7 days!&#8221;</p></blockquote><p>Is it possible to do this? Sure. Is it realistic? No. Anyone who does it will either be the G.O.A.T. at test taking <strong>OR</strong> will already have plenty of necessary experience in cyber security. This is not going to be about the contents of the test itself, but what may help <strong>YOU</strong> pass.</p><p>Personally, I took a much slower approach.. The <strong>six-month</strong> approach. The kind of approach where I started with 2 kids and ended with 3 kids. </p><p><em>Yes, I had a kid along the journey. Yes, I&#8217;m insane.</em></p><p>I studied for six months, typically about <strong>2-6 hours per week</strong>. Some weeks I didn&#8217;t even study. I said I had a third kid, remember? </p><p>What helped was the consistent exposure over a long period of time. This helped immensely with knowledge retention! There is one more secret to my success: <strong>years of experience</strong> in several domains of the CISSP. Here are the <strong>SIX</strong> resources I used, all less than $100 each.</p><h3>Syracuse University&#8217;s O2O Program</h3><p>Since I&#8217;m a veteran, I was eligible to go through <strong><a href="https://ivmf.syracuse.edu/success-stories/o2o-ss/">Syracuse University&#8217;s Onward to Opportunity (O2O)</a></strong> program. I understand not everyone will qualify.</p><p>They no longer offer CISSP, which sucks.. but what it really gave me was structure.</p><p>Their program forced me to watch videos and take practice exams. If you sign up for Mike Chapple, <a href="https://www.udemy.com/course/isc2-cissp-full-course-practice-exam/">Jason Dion</a>, or any other highly rated course that comes with practice exams, you&#8217;ll be fine. 
You need a <strong>QUALITY</strong> online course that gives you structure and practice exams to dive into at the end.</p><h3>Mike Chapple&#8217;s LinkedIn Learning Course</h3><p><a href="https://certmike.com/">Mike Chapple</a> is one of the go-to instructors for CISSP. His content is high quality. </p><p>His <strong>LinkedIn Learning</strong> course was great and provided a <strong>walk-through of all eight domains</strong>. I used it as a refresher and to mix things up from the O2O material. The eight domains cover a <strong>WIDE</strong> range of knowledge. </p><p>You are almost never going to be in a job that hits all of them regularly.</p><h3>Pocket Prep</h3><p>Live with Pocket Prep. Breathe with Pocket Prep. Sleep with Pocket Prep.. Okay.. maybe not all that. This was easily my most used resource. </p><p>In the bathroom? <strong>Pocket Prep.</strong> Insomnia? <strong>Pocket Prep.</strong> Waiting in line? <strong>Pocket Prep.</strong></p><p>The ways to use it are endless. It explains every question and where it is covered in the <strong>ISC2 </strong>material.</p><p>It helped me reinforce the material in the online videos I was watching, without allowing me to fully memorize practice test answers. <strong>As I watched a video about a CISSP domain, I would do some questions in that domain.</strong> Pocket Prep allows that.</p><h3><a href="https://www.youtube.com/watch?v=gKe88tIeVYo">Kelly Handerhan</a>&#8217;s CISSP Mindset</h3><p>Her video is <strong>the most important video</strong> you can watch. Period. I found it very late in the game.</p><p>Watch her video at the beginning of your journey. Watch it <strong>AGAIN</strong> 1-2 weeks before your test. Then watch it <strong>ONE MORE TIME</strong> on test day. Yes. Watch it many times. Worship it. </p><p>Be one with the video. You&#8217;ll get it once you take the test.</p><h3>CISSP Exam Cram</h3><p>This video is a strange one.. I&#8217;m not sure if it helped. 
I think it helped in the sense of giving structure to my last week. <a href="https://www.youtube.com/watch?v=_nyZhYnCNLA">Pete Zerger&#8217;s cram video</a> covers all of the domains in <strong>8 hours</strong>. </p><p>That is a LOT of content in a short time frame.</p><p>The last ~7-8 days you can basically watch <strong>ONE domain per day</strong> and then do practice questions on that domain. This is <strong>reinforcement through repetition</strong>..</p><p>So I think it helped.</p><h3>The OFFICIAL ISC2 Study App</h3><p>You may wonder why I mention the OFFICIAL app last. Honestly.. I only found out about it <strong>literally</strong> one week before my test day. </p><p>I found ISC2&#8217;s official app to accurately reflect the <strong>STYLE</strong> of questions on the test. Who would have thought that <strong>ISC2</strong> made their study app an accurate reflection of the test?</p><p>Why do I mention it last? I suggest you <strong>DO NOT</strong> touch this app until 2 weeks away from test day. Use it at the end so you do not memorize its questions and answers. </p><p>Why? You are using it to confirm readiness rather than to learn new content.</p><p>As I answered the ISC2 questions, I felt I knew the material. The answers made sense. I was thinking the way a cyber security leader does. It was coming together. 
<strong>No answer memorizing happened.</strong></p><p>Use the app to do a knowledge check during your last week.</p><h3>My summarized strategy</h3><p>My overall study strategy simplified:</p><ol><li><p>Study casually over a long period</p></li><li><p>Use Pocket Prep and a Udemy or LinkedIn Learning course to <strong>LEARN</strong></p></li><li><p>Review wrong questions in Pocket Prep over and over</p></li><li><p>Go through practice tests</p></li><li><p>1-2 weeks from exam day, start <strong>CRAM</strong> week</p></li></ol><p>What does CRAM week look like?</p><ul><li><p>Start ~8 days in advance</p></li><li><p>Watch content on <strong>ONE</strong> domain per day</p></li><li><p>Do ISC2 official app questions in that domain after watching</p></li><li><p>Ensure you are answering the official app questions <strong>at least 80% right</strong></p></li><li><p><strong>If you are not answering the questions well, move your exam day out</strong></p></li></ul><h3>Final Advice</h3><p>This is a test of your mindset. Not a test about remembering ports, protocols, or how to capture network packets.</p><p><strong>Think like a leader</strong>.</p><p>Watch Kelly Handerhan&#8217;s video. You <strong>WILL</strong> pass the test with the right attitude.</p>]]></content:encoded></item><item><title><![CDATA[I tried an immutable Windows replacement...]]></title><description><![CDATA[and then I realized I needed it to be mutable.]]></description><link>https://blog.3d6564.com/p/i-tried-an-immutable-windows-replacement</link><guid isPermaLink="false">https://blog.3d6564.com/p/i-tried-an-immutable-windows-replacement</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Mon, 24 Nov 2025 20:47:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b7b27b4c-5789-4a8d-81a4-278f7067b1ed_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Sorry for the quite long delay.. Blogging is a bit of an experiment for me. 
I&#8217;m using it as a way to practice organizing my thoughts. </em></p><p><em>Farther down the page I include the exact steps and commands to add a USB-C Jadens printer to Bazzite OS or other similar Fedora-based Linux distros. Feel free to skip ahead.</em></p><h2>In the beginning</h2><p>Over the last year I have been growing tired of Windows as my daily driver OS. What sent me over the edge was a recent Windows 11 update that changed the load order of my Framework 16&#8217;s WiFi driver and would <em>Blue Screen Of Death</em> (BSOD) randomly. I went down the path of many fixes, and some seemed like they fixed the issue until they didn&#8217;t. I tried driver updates, driver resets, freshly installing Windows updates, etc. </p><p>None of those fixes worked. How could I give up the conveniences of Windows though? I dealt with random BSODs for about 3 or 4 months.. hoping. Waiting&#8230; More waiting.. I was hoping some fix would come. It didn&#8217;t. Then Microsoft announced Windows would become an agentic operating system. I&#8217;m not sure I can go a single day without AI or agentic being mentioned at least once now. Enter Linux.</p><h2>I&#8217;m not a psychopath</h2><p>Listen.. <em><strong>PewDiePie even switched to Linux this year.</strong></em> Our savior of memes. He, however, is clearly a masochist and chose Arch Linux. I&#8217;m not ready for that level of struggle. I need something that works reliably and requires little troubleshooting. I had only two requirements.</p><ul><li><p>Gaming friendly (<a href="https://itsfoss.com/linux-gaming-distributions/">It&#8217;s FOSS article on gaming distros</a>)</p></li><li><p>Framework 16 compatibility <a href="https://frame.work/linux">(Framework supported distros)</a></p></li></ul><p>You may say I should just use Arch Linux, especially as a Framework user.. and I would agree. I have a Framework 16. It is, honestly, probably the reason Windows 11 was not working well for me. 
I am clearly willing to suffer endlessly for no reason just for the sake of tinkering. I will pretend like that is not the case here, though. </p><p>If you looked at the links for my two requirements you&#8217;ll see two distros that appear in both articles: <strong>Ubuntu</strong> and <strong>Bazzite</strong>. I use Ubuntu as my base distro for every homelab project. It comes with very high compatibility and support. I have enjoyed it. I was willing to try something new. Something fresh. I am an elite gamer. I need games like <em>Megabonk</em> or <em>Escape from Duckov</em> to work.</p><h2>What is Bazzite</h2><p>What led me to choose Bazzite is that it seemed like a well-supported distro. I liked a few things about it, especially the gaming-oriented focus. It comes with Steam. It is clearly meant for gamers. It&#8217;s Fedora based&#8230; Prior to Bazzite, I&#8217;d never used a Fedora distro, but it sounds nice. Reddit says Fedora is &#8220;<a href="https://www.reddit.com/r/Fedora/comments/iqgf5u/why_do_people_use_fedora/">bleeding edge</a>&#8221;, but stable. Stable, you say? So the opposite of Arch Linux? Now I&#8217;m a bit interested. Here is a list of several reasons to choose Bazzite.</p><ul><li><p>Immutable (system files are <strong>read-only</strong>)</p></li><li><p>Gaming oriented</p></li><li><p>Declarative</p></li><li><p>Fedora-based (bleeding edge)</p></li><li><p>Stable (Flatpaks and containerized app management)</p></li><li><p>Strong hardware support</p></li><li><p>Community support</p></li></ul><h2>A road bump with Linux</h2><p>Looking back.. My first major challenge could have been a deal breaker. I chose an immutable operating system. I&#8217;m not supposed to change things. I should not be looking to modify it. I should be using everything out of the box. No modifications. Easy, peasy. Nope.. Wrong. </p><p>I sell some 3D prints on Etsy and I periodically need to print shipping labels. 
I happen to use a Bluetooth Jadens BY-C10 label printer that I <strong>need</strong> to work with my laptop. This printer makes my life so much easier. It saves a LOT of time with shipping labels. Welp, Bluetooth doesn&#8217;t work. Nothing detected. What about USB-C? It detects it, but nothing. </p><p>How do you change an immutable OS? I head off to the Jadens driver website <a href="https://jadens.com/pages/download-video">here</a>. Linux support! Perfect! I download the zip and I find some <em>deb </em>and <em>rpm</em> files.. Well.. That sucks. I don&#8217;t have an option to open these file types on Bazzite. ChatGPT&#8230; help me.</p><h2>Adding the Jadens print drivers</h2><p>Below are the steps I used to get the printer working. ChatGPT helped.. but it sure made it more difficult. The amount of extra commands and junk it fed me is indescribable. </p><p>First you need to use <code>rpm-ostree</code> to add the RPM layer to your local machine. Then reboot the machine.</p><pre><code>sudo rpm-ostree install jadens-printer-driver_3.1.5.491_amd64.rpm
sudo systemctl reboot</code></pre><p><em>If you are doing this a lot, an immutable OS is probably not for you.</em></p><p>Confirm you now see the driver.</p><pre><code>rpm -qa | grep -i jadens</code></pre><p>Next, you can plug in your device and confirm you see it as a USB device. In this instance, I am interested in CUPS (the *NIX printing system) detecting it.</p><pre><code>lpinfo -v</code></pre><p>I still ran into detection issues; my device did not appear in the output above, so I did a few more troubleshooting steps. The next commands were run together in series, after which the printer was detected and available in CUPS.</p><pre><code>sudo systemctl enable --now cups cups-browsed
sudo systemctl restart cups</code></pre><p>I then wanted the printer to be detectable in my browser, Brave. So I needed to restart Brave.</p><pre><code>pkill -TERM brave</code></pre><p>After I forced Brave to restart, I navigated to <code>http://localhost:631/admin</code>. Log into that using your local machine credentials and go through the respective <strong>Add Printer</strong> steps. I then restarted Brave once again, because why not.</p><h2>Immutable to mutable</h2><p>I&#8217;m not sure yet if I&#8217;ll stick with <strong>Bazzite</strong>. I like the simplicity; it is refreshing. I&#8217;m not quite used to how Fedora-based distros work, but I will at least use it for a few months to give it a try. After the printer driver fiasco, I know that if I have to keep doing that, I will look at another distro. What I love about it is the support for Flatpaks and AppImages. All of the apps run in their own sandboxed containers, which can help keep my local machine secure. </p>]]></content:encoded></item><item><title><![CDATA[DevOps Tips: Automate Your Homelab CI/CD]]></title><description><![CDATA[Don't learn with production.. learn in the lab!]]></description><link>https://blog.3d6564.com/p/devops-tips-automate-your-homelab</link><guid isPermaLink="false">https://blog.3d6564.com/p/devops-tips-automate-your-homelab</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Sat, 04 Jan 2025 15:21:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3d674a2a-db65-4059-983a-ec6d4ccdc31f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intro</h2><p>My DevOps series will cover self-hosting services that will help you learn some DevOps principles. My first post was on why you should version control your homelab, available <a href="https://blog.3d6564.com/p/version-control-your-home-lab-now">here</a>. 
I won&#8217;t be teaching introductory-level topics because there is already a ton of great content out there. So I won&#8217;t cover the basics of Docker, Linux, networking, etc.</p><p>In this post I&#8217;m covering Gitea, Gitea Actions, and a few best-practice concepts for automating CI/CD in your lab. This will help you think through solutions in your own lab and whatever you may do for work. This lightweight CI/CD can be used to deploy setup configs, security indicators, network configs, and more! It can be integrated with Ansible as well, which I may cover in a later post.</p><h2>Why Gitea?</h2><p>There are a few big players in the code repository space and they are all capable of what I cover. I chose <a href="https://about.gitea.com/">Gitea</a> mainly because it is lightweight and can be self-hosted. In my homelab, Gitea actually syncs with a private Git repository. This is done because I want a backup of my homelab setup outside my homelab. I am treating that as an offsite backup; your production environment should have a similar design. Back to Gitea, you can deploy it on basically any hardware. I am running it on a <a href="https://www.raspberrypi.com/products/raspberry-pi-4-model-b/">Raspberry Pi 4B</a> that boots from a 1TB SATA SSD. Here are some more details on Gitea.</p><h4>Key Benefits</h4><ul><li><p>Lightweight &amp; fast: Gitea is built in Go, and the project aims to stay lean and use few resources</p></li><li><p>Self-hosted: You can maintain full control of the code and infrastructure</p></li><li><p>Built-In CI/CD: <a href="https://docs.gitea.com/next/usage/actions/overview">Gitea Actions</a> allows automating builds, tests, and deployments and just requires an additional Docker container</p></li><li><p>Documentation: Gitea has decent documentation and a plethora of writeups on what to do</p></li></ul><h2>Gitea Flow</h2><p>My instructions require that you have Docker and Docker Compose available on a host in your homelab. 
One of the most common ways to deploy Gitea is using Docker Compose. The services are then all accessed using Nginx Proxy Manager. I can do a writeup later on that. If you want to set this up without Nginx then you will need to use a port mapping like <code>- 3000:3000</code> in your compose files. Here is my code and action flow.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q3Wk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q3Wk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 424w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 848w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 1272w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q3Wk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png" width="854" height="456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c97a445-d406-49cc-bc23-f3b359113088_854x456.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:456,&quot;width&quot;:854,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:43255,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q3Wk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 424w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 848w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 1272w, https://substackcdn.com/image/fetch/$s_!q3Wk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c97a445-d406-49cc-bc23-f3b359113088_854x456.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h4>Gitea</h4><p>I have it use <code>services_default</code> network that all my services run on so they can communicate without exposing ports. I also have it mapping all of the data to <code>/data/gitea</code> so I can easily make backups to protect against hardware failure.</p><pre><code>version: '3.8'
services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    volumes:
      - /data/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - services_default
    restart: unless-stopped
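    # If you skip Nginx Proxy Manager, publish ports directly instead (an example
    # mapping only; 22 is the container's built-in SSH port, host ports are your choice):
    # ports:
    #   - 3000:3000
    #   - 2222:22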
networks:
  services_default:
    external: true</code></pre><h4>Gitea Runner</h4><p>This uses the same network as earlier. There are two additional notes with the Runner though. You need to <em>pass environment variables</em> to it so it can connect to your Gitea instance and you will want to <em>map volumes of services</em> you want your Gitea Actions to be able to update easily. This allows me to pull code changes to the mapped volumes for <a href="https://squidfunk.github.io/mkdocs-material/">MkDocs </a>and <a href="https://gethomepage.dev/">Homepage</a> and then add in necessary API keys for these services after the updates are retrieved.</p><pre><code>version: '3.8'
services:
  runner:
    image: gitea/act_runner:nightly
    environment:
      GITEA_INSTANCE_URL: '${INSTANCE_URL}'
      GITEA_RUNNER_REGISTRATION_TOKEN: '${REGISTRATION_TOKEN}'
      GITEA_RUNNER_NAME: '${RUNNER_NAME}'
      GITEA_RUNNER_LABELS: '${RUNNER_LABELS}'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/mkdocs:/docs
      - /data/homepage:/homepage
    networks:
      - services_default
networks:
  services_default:
    external: true</code></pre><h4>CI/CD Flow</h4><p>The Gitea Actions are defined in a folder <code>.gitea/workflows</code> that is also part of the code repository. You will need to build an action and point it to a flow for this to work. This flow is a basic workflow that will pull updates to mapped volumes and then replace API keys and passwords. It uses stored secrets to access the code repository. Then it runs a shell script that string-replaces each key and password placeholder with its respective environment variable that has been set up. Here is the action workflow.</p><pre><code>name: Deploy Docs

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Install git
        run: |
          apk update
          apk add --no-cache git
      - name: Clone Repository
        run: |
          if [ ! -d "/tmp/labgeek" ]; then
            git clone https://${{ secrets.REPO_USER }}:${{ secrets.REPO_TOKEN }}@gitea.labgeek.io/ed/labgeek.git /tmp/labgeek
          fi
          cd /tmp/labgeek
          git fetch origin
          git reset --hard origin/main
      - name: Copy repo content to the docs volume
        run: |
          cp -r /tmp/labgeek/* /docs/
      - name: Copy homepage config to the homepage volume
        run: |
          cp -r /tmp/labgeek/docs/include/homepage/* /homepage/
      - name: Replace API keys
        run: |
          # Navigate to homepage directory
          cd /homepage

          # Load environment and replace variables
          chmod +x ./replace_api_keys.sh
          
          # Replace keys
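          # replace_api_keys.sh itself is not shown in this post. As a rough sketch
          # (assuming placeholder tokens such as __HOMEPAGE_API_KEY__ in the copied
          # files and matching environment variables), it could be one sed per key:
          #   sed -i "s/__HOMEPAGE_API_KEY__/${HOMEPAGE_API_KEY}/g" /homepage/services.yaml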
          ./replace_api_keys.sh</code></pre><h2>Next Steps</h2><p>I did not cover every component but some of the core ones. I hope this inspires you to explore how you could test out CI/CD in your homelab. If you want to take this to the next level, here are some other components you can integrate.</p><ol><li><p>Use a secrets manager service, not environment variables</p></li><li><p>Improve automation with Ansible</p></li><li><p>Integrate SSH for services not on the same host (instead of mapping volumes)</p></li></ol><p>There are also some things to consider when automating these components. Managing SSH keys, file permissions, and network permissions were some additional things I had to work through for this to work. </p><h2>Final Thoughts</h2><p>This setup will lay the groundwork for understanding a CI/CD pipeline, and you won&#8217;t risk blowing up production at work! This entire flow can be set up with a couple of Raspberry Pis to help simulate network segmentation, SSH keys, and remote file permission issues. The most critical skill you can gain is the desire to learn and troubleshoot. Everyone&#8217;s environment will be a bit different and will require research on how to fix it. </p><p>Thank you for reading and feel free to reach out for questions or help troubleshooting! I would love to hear from you all.</p>]]></content:encoded></item><item><title><![CDATA[CTI 001: Why does Threat Intelligence matter?]]></title><description><![CDATA[CTI is the data analytics and business intelligence of cyber. 
Fight me.]]></description><link>https://blog.3d6564.com/p/cti-001-why-does-threat-intelligence</link><guid isPermaLink="false">https://blog.3d6564.com/p/cti-001-why-does-threat-intelligence</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Sun, 17 Nov 2024 14:32:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/12d79187-d69c-48d3-afe7-212f25636d23_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intro</h2><p>Threat Intelligence is often discussed as separate from data analytics, but this distinction needs to change. Fundamentally, threat intelligence is data analytics with a focus on threats or adversaries. It often involves custom tools and solutions that aren't strictly necessary. Many of these solutions are tailored to address Extract, Transform, and Load (ETL) problems that exist when working with specialized data. This specialized data can be logs from a private system or combat equipment, or data that is publicly available in a web format.</p><p>In cyberspace, threat intelligence closely mirrors traditional data analytics and engineering. The problems with system integrations are very similar, if not identical in many cases. This became clear to me when I learned about Kafka's use in many security stacks to handle ETL processes for system log data. Kafka is also a key tool for queueing massive amounts of data in many big data pipelines. It tends to be the backbone of event-driven architectures as well.</p><h2>So What?</h2><p>If your Threat Intel team <em>does NOT have standard operating practices</em> and lacks a communication channel with leadership, then you may find my Cyber Threat Intelligence (CTI) series worth a read. 
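</p><p><em>To make the data-analytics parallel concrete, here is a toy ETL sketch using invented log lines and plain Unix tools (no particular CTI stack implied): extract attacker IPs from auth log lines, transform them into bare indicators, and load a deduplicated list.</em></p><pre><code>printf 'Failed password for root from 203.0.113.7 port 22\nAccepted password for cory from 198.51.100.24 port 22\nFailed password for admin from 203.0.113.7 port 22\n' \
  | grep 'Failed password' \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort -u
# -&gt; 203.0.113.7</code></pre><p>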
You may also find it interesting if your team <em>does NOT have processes that enable integration</em> with your security operations center, developers, or incident responders.</p><p>I'm highlighting Threat Intelligence because it is so similar to traditional data analytics and I see an industry push treating it differently. Threat Intelligence data should influence critical business decisions by key leaders at the <strong>strategic</strong> level. It should also be used by analysts and operators at the <strong>tactical</strong> level. Personally, I call this the CTI sandwich. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j8_S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j8_S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 424w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 848w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 1272w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!j8_S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png" width="652" height="620" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:620,&quot;width&quot;:652,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:56864,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j8_S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 424w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 848w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 1272w, https://substackcdn.com/image/fetch/$s_!j8_S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7be9fa04-4d5b-4041-a2f9-5dbae9f73b60_652x620.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>Strategery</h2><p>Building on that, CTI is instrumental in supporting decision-making at both the <strong>strategic</strong> and <strong>tactical</strong> levels. It's crucial to understand that CTI doesn't make the decisions itself; rather, it equips others to make the <strong>right</strong> decisions. A common problem is that many organizations fail to integrate CTI into their decision-making processes, but quickly include other traditional data analytics products. Let's break down the two types of decisions that come from threat intel.</p><ul><li><p><em>Strategic Level</em> involves long-term, high-level planning that can shape the direction of an organization. 
Strategic intelligence builds threat actor profiles, tracks industry trends, and can include geopolitical assessments. The people making these decisions will typically be senior leaders and C-Suite members or their representatives. </p></li><li><p><em>Tactical Level</em> focuses on short-term actions aimed at achieving a specific objective or mission. Tactical intelligence guides incident response and threat mitigation in a network. Intelligence at this level includes Indicators of Compromise (IOCs), exploitable vulnerabilities within a network, and tactics that may be used to compromise a network.</p></li></ul><p>A case study of VPNFilter shows the importance of a good threat intelligence strategy and how it plays a part at multiple levels. In 2018, the Talos Intelligence Group released data about the VPNFilter malware and potentially targeted systems. This immediately highlights both levels of threat intelligence, strategic and tactical. Talos published the impacted devices and remediation guidance to reduce risk from the malware, and noted that the ultimate targets were SCADA systems. 
This helped leaders at the strategic level make changes to priorities of work and focus for their organization.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SBGi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SBGi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 424w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 848w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 1272w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SBGi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png" width="627" height="205" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:205,&quot;width&quot;:627,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:64176,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SBGi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 424w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 848w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 1272w, https://substackcdn.com/image/fetch/$s_!SBGi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81249a0c-8c31-4af9-98ab-b3b2f4b5a989_627x205.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3>Strategic Level</h3><p>Leaders at the strategic level are planning long-term operations and making decisions at a higher level than what is needed by analysts and operators at the tactical level. 
At the end of the day, decisions made by leadership need to flow down to the security analysts and operators who implement changes aligned with those decisions. Those decisions need to be translated into something actionable.</p><p>It is important that outputs from CTI help improve the decisions made at this level. What does this mean? Don't send leaders a list of IPs or a list of tactics to integrate into your Security Information and Event Management (SIEM) solution. They need answers around the following:</p><ul><li><p>Recommendations and answers to information requirements</p></li><li><p>Recommendations and notification of key decision points</p></li><li><p>Industry-specific risks and adversary updates that are important</p></li><li><p>Assessed actions adversaries may take to impact the organization</p></li></ul><p>This list is not comprehensive. Outputting products that support the above decisions made by leadership is just one component of what a CTI team may do. It is important to avoid tactical-level information, unless it is truly critical to the decision leaders are making.</p><h3>Tactical Level</h3><p>Everyone at the tactical level finds themselves in the trenches at one point or another. Day-to-day operations can get hectic, and CTI can be one of the tools that helps create order out of the chaos. Products from CTI can drive hunt operations, SIEM rules, or aid in incident response. Having an in-house CTI team can help the products be focused and specialized for your organization. </p><p>Since CTI sits between senior leadership and the security analysts and incident responders, the products need to cater to both. The same products that help answer information requirements or assess industry risk also need to drive the analysts' actions. There are two critical warnings for the CTI products at this level. 
</p><ul><li><p>Provide more than just IOC lists</p></li><li><p>Outputs need to be actionable</p></li></ul><p>How the analysis from CTI gets to the receivers at the tactical level will depend on your organization and whether tools are already available. If your team doesn't already have a platform, then an open-source one like Filigran's OpenCTI may do the trick. A good Threat Intelligence Platform (TIP) will allow you to serve data up to the SIEM and to publish summarized reports, as well as help analyze all of the data being ingested. Some key outputs to focus on are:</p><ul><li><p>Concise summaries of tactics, techniques, and procedures (TTPs) related to the organization</p></li><li><p>IOC feed integration with the SIEM</p></li><li><p>Enrichment of alerts and observations from the SIEM</p></li><li><p>Integration of solicited feedback</p></li></ul><p>These aren't the only outputs, but they are some of the key ones. It is also worth knowing that some outputs can be a simple message on Slack or Teams. You don't need a multi-million-dollar infrastructure or set of software licenses to get the job done. The CTI team needs to make sure the intelligence gets to the right people. </p><h2>Conclusion</h2><p>Successfully integrating threat intelligence into your organization's decisions and having it drive action is no different from taking the data from your business intelligence team to drive business decisions. The same principles carry over to threat intelligence. Threat intelligence has been around a while and has structure to it. Yet, the CTI team is often one of the last ones added to the organization. </p><p>A CTI team needs to be flexible, open to feedback, and quick to process all of the data they ingest. The team needs to be willing to look at things from a different perspective. I recommend skills in data engineering, data science/analytics, and of course cybersecurity. CTI can drive change and action in an organization. 
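</p><p>As a closing illustration of the tactical outputs above, here is a toy sketch of alert enrichment in Python. The alert shape and IOC context are hypothetical; a real TIP-to-SIEM integration performs the same join at scale:</p>

```python
# hypothetical context a TIP might hold, keyed by indicator value
IOC_CONTEXT = {
    "198.51.100.7": {"campaign": "ExampleBear", "confidence": "high"},
}

def enrich(alert, ioc_context):
    """Attach threat-intel context to a SIEM alert when its indicator is known."""
    context = ioc_context.get(alert.get("indicator"))
    return {**alert, "intel": context}

alert = {"id": 42, "indicator": "198.51.100.7", "rule": "outbound-c2"}
print(enrich(alert, IOC_CONTEXT))
```

<p>Even a sketch this small shows the principle: the intelligence only matters once it reaches an analyst's alert queue.</p><p>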
</p>]]></content:encoded></item><item><title><![CDATA[Python Tips: Dynamic Tab Completion]]></title><description><![CDATA[Do you want to have dynamic, nested subcommands that can do tab complete? Me either.]]></description><link>https://blog.3d6564.com/p/python-tips-dynamic-tab-completion</link><guid isPermaLink="false">https://blog.3d6564.com/p/python-tips-dynamic-tab-completion</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Thu, 24 Oct 2024 22:52:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9d658e57-ccbf-4552-94e9-d562521c81f0_512x512.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intro</h2><p>Tab completion is a feature most of us cherish in a CLI application. It can improve user interaction, especially for us lazy typists. It can also reduce errors. Implementing tab completion is not difficult, but what if you want it to be dynamic? What about having tab completion for commands and their subcommands? If you start thinking about this, it can quickly balloon into a lot of static methods in an application. </p><p>What I'm showing isn't a novel or new approach, but rather a high-level look at how I implemented it. I am using Python's dynamic method creation capability to generate completion methods at runtime and when certain classes get called. 
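</p><p>Before the detailed walkthrough, here is a stripped-down sketch of the core trick: binding generated <em>complete_*</em> methods onto a <em>cmd.Cmd</em> subclass at runtime. The command table and names below are hypothetical, not the ones from my tool:</p>

```python
import cmd

# hypothetical command table; imagine it loaded from a file or database
COMMANDS = {
    "configure": ["hosts", "users", "help"],
    "run": ["scan", "report", "help"],
}

def make_completer(subcommands):
    """Build a complete_* method that filters subcommands by prefix."""
    def complete_method(self, text, line, begidx, endidx):
        return [sc for sc in subcommands if sc.startswith(text)]
    return complete_method

class MainCmd(cmd.Cmd):
    prompt = "(demo) "

# bind do_/complete_ pairs at runtime instead of hand-writing one per command
for name, subs in COMMANDS.items():
    setattr(MainCmd, f"do_{name}", lambda self, arg: print(arg))
    setattr(MainCmd, f"complete_{name}", make_completer(subs))
```

<p>When the user presses Tab, <em>cmd</em> looks up <em>complete_&lt;command&gt;</em> on the class, so methods attached with <em>setattr</em> participate exactly like hand-written ones. The rest of this post generalizes that idea to nested subcommands.</p><p>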
It rebuilds the potential tab complete options each time the class is called.</p><h2>The Problem: A Complex CLI</h2><p>Build a CLI application with the below capabilities:</p><ul><li><p><strong>Dynamic commands</strong>: commands and subcommands are not fixed and can be added to a database or file and reloaded at any time</p></li><li><p><strong>Nested subcommands</strong>: subcommands can be nested under multiple levels</p></li><li><p><strong>Tab complete</strong>: all levels of the command hierarchy need tab completion, with the necessary arguments passed to the appropriate subcommand</p></li></ul><p>One might ask why I wanted to make such a complex CLI interface. The problem, at the core, is more about making the CLI interface appear simple, however complex the backend might be. I also wanted the codebase not to require any changes when someone adds commands to the application, since these are maintained by a file and/or database.</p><h2>The Solution: Dynamic Methods</h2><p>Dynamic method creation allows the generation of methods at runtime or when a class is called, depending on how it is implemented. It uses Python's ability to manipulate class attributes dynamically. What I am going to show will have some missing components. You can see the full code in my application repo on <a href="https://www.github.com/3d6564/artifactor">github/3d6564/artifactor</a>. In the <em>menu.py</em> file you can see how it is implemented. I point to that repository because there are varying levels of complexity, and what I implemented is more complex than a basic dynamic tab-completion method.</p><h3>Key libraries needed</h3><p>I use two libraries to accomplish everything. 
One provides the CLI interaction and the second builds the method map for the commands that get loaded from a file.</p><ul><li><p><em>cmd</em></p></li><li><p><em>inspect</em></p></li></ul><h3>Key methods needed</h3><p>This is a subset of the methods and their overall purpose in the dynamic tab complete design.</p><ul><li><p><em>dynamic_complete</em>: This is a generic function that handles the logic of tab completion for commands and subcommands. Essentially, it fetches possible commands for what has been typed and returns the appropriate potential tab complete results.</p></li><li><p><em>create_complete_methods</em>: This function creates the completion methods dynamically and binds them to the respective class they are called through. Note that it creates a method within itself and returns it, with the results delegated to the <em>dynamic_complete</em> function.</p></li><li><p><em>fetch_subclasses</em>: This is a basic function to retrieve the subclasses that meet certain criteria for a class object</p></li><li><p><em>fetch_nested_submethods</em>: This function will retrieve the possible subcommands that have been made for a class and subclass called. The nested subcommands will need to be made prior to calling this function.</p></li></ul><h4>1. Dynamic Complete</h4><p>In a perfect world, the final return statement is basically never reached.</p><pre><code>def dynamic_complete(self, text, line, begidx, endidx, command_name, subcommand_fetchers):
&nbsp; &nbsp; &nbsp;"""dynamic method for tab completion of subcommands and nested subcommands."""
&nbsp; &nbsp; &nbsp;try:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; remaining_text = line[len(command_name):].strip()
&nbsp; &nbsp; &nbsp;except Exception:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; return []

&nbsp; &nbsp; &nbsp;# fetch possible primary subcommands
&nbsp; &nbsp; &nbsp;possible_matches = subcommand_fetchers.get(command_name)

&nbsp; &nbsp; &nbsp;# if remaining_text is empty, suggest primary subcommands
&nbsp; &nbsp; &nbsp;if not remaining_text:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; return [sc + ' ' for sc in possible_matches]

&nbsp; &nbsp; &nbsp;# split remaining_text to handle subcommands and nested subcommands
&nbsp; &nbsp; &nbsp;split_text = remaining_text.split(maxsplit=1)
&nbsp; &nbsp; &nbsp;primary_subcommand = split_text[0]
&nbsp; &nbsp; &nbsp;remaining_subtext = split_text[1] if len(split_text) &gt; 1 else ''

&nbsp; &nbsp; &nbsp;# check matching subcommand and not only a space typed
&nbsp; &nbsp; &nbsp;if ' ' not in remaining_text and primary_subcommand not in possible_matches:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; filtered_matches = [sc for sc in possible_matches if sc.startswith(remaining_text)]
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; return [sc + ' ' for sc in filtered_matches]

&nbsp; &nbsp; &nbsp;# get nested subcommands if subcommand fully typed
&nbsp; &nbsp; &nbsp;if primary_subcommand in subcommand_fetchers:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; subcommands = subcommand_fetchers[primary_subcommand]
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; return [sc for sc in subcommands if sc.startswith(remaining_subtext)]

&nbsp; &nbsp; &nbsp;return []</code></pre><h4>2. Create Complete Methods</h4><p>This binds the generated methods to the respective class when called.</p><pre><code>def create_complete_methods(command_name, subcommand_fetchers):
&nbsp; &nbsp; &nbsp;"""Create a complete_&lt;command_name&gt; method dynamically with support for nested subcommands."""
&nbsp; &nbsp; &nbsp;def complete_method(self, text, line, begidx, endidx):
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; return dynamic_complete(self, text, line, begidx, endidx, command_name, subcommand_fetchers)
&nbsp; &nbsp; &nbsp;return complete_method</code></pre><h4>3. Fetch Subclasses</h4><p>This one is pretty straight forward.</p><pre><code>def fetch_subclasses(cls):
    """Primary subcommands under 'configure'"""
    subcommands = [subclass.__name__.lower() for subclass in cls.__subclasses__()]
    return subcommands + ['help']</code></pre><h4>4. Fetch Nested Submethods</h4><p>This one is very similar to <em>fetch_subclasses</em>, but it operates on subcommands.</p><pre><code>def fetch_nested_submethods(cls, sub_cls):
&nbsp; &nbsp; &nbsp;"""Nested subcommands under 'configure hosts'"""
&nbsp; &nbsp; &nbsp;methods = list(set(dir(sub_cls)) - set(dir(cls)))
&nbsp; &nbsp; &nbsp;if cls == Run:
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; methods = [method for method in methods if method.startswith('do_')]
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; methods = sorted([method[3:] if method.startswith('do_') else method for method in methods])
&nbsp; &nbsp; &nbsp;return methods + ['help']</code></pre><h2>Implementation</h2><p>The implementation, when I think about it, feels like a migraine. I still want to try to find a way to simplify it, but I just don't know if there is one. Everything gets called under a <em>Cmd</em> class object. The class object dynamically builds a command map when its initialized and that is what will build the basic command map. </p><p>It should also be noted that I override the tab completion capability to remove trailing spaces using the command map made during initialization. This all happens in <a href="https://www.github.com/3d6564/artifactor">Artifactor</a>'s <em>MainCmd</em> class. Also, because of the inherent dynamic nature of the nested subcommands, I have to generate a dynamic help. </p><p>A more simple implementation of the dynamic subcommands can be viewed in the <em>Run</em> class. This one is easier to see how the dynamic tab complete works, because it does not have nested subcommand menus like the main menu of the CLI tool.</p><h2>Conclusion</h2><p>I have probably done a terrible job of explaining this. I hope I've at least inspired some thoughts on how you could implement something similar, and maybe more simple as well. Feel free to reach out if you have questions or want to collaborate and provide a more simple approach to dynamic nested subcommands in Python!</p><p>Happy coding!</p><p></p>]]></content:encoded></item><item><title><![CDATA[Lambda Layers in Windows - The Easy Way]]></title><description><![CDATA[This isn't necessarily the best way.. 
it's more like the Temu way.]]></description><link>https://blog.3d6564.com/p/lambda-layers-in-windows-the-easy</link><guid isPermaLink="false">https://blog.3d6564.com/p/lambda-layers-in-windows-the-easy</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Mon, 02 Sep 2024 15:06:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8afbcebd-705b-49b1-b71f-7cc923d44422_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intro</h2><p>I&#8217;ll start by saying this.. This isn&#8217;t necessarily the right way to do things. This is a way I found that helped get around some environment restrictions I had run into. I was having a difficult time deploying a lambda layer in AWS to use a Markdown library in Python. I could make the layer in my personal account, but I couldn&#8217;t get it deployed in the environment I was working in.</p><p>This guide will teach you how to package a lambda function so that all required libraries are self-contained in the lambda. <em><strong>You may not be able to edit the lambda in the AWS user interface with this method.</strong></em> </p><h2>Core</h2><p>To dive right in, we will need a few things set up: a folder to use as our self-contained lambda, a Python version matching our lambda runtime (Python 3.7, Python 3.10, etc.), and a virtual environment tool. A lambda can be set to a variety of runtimes, and it is important to ensure your current environment is using the matching version of your runtime. Technically, a mismatched version can still work, as long as all associated libraries are compatible.</p><p>Your folder structure will look like this at the end:</p><pre><code>lambda_function.zip
|-- lambda_function.py
|-- &lt;package folders&gt;</code></pre><p><em>There is not an additional subfolder that houses the lambda_function.py and libraries. A subfolder like `python` is not needed for this method. This is different than if you use lambda layers.</em></p><p>Start by organizing your project and environment into a single folder. Once we've created everything, we will move the contents of `site-packages` to the below location, along with the actual lambda function.</p><pre><code>&lt;your project name&gt;
|-- &lt;subfolder for virtual env here&gt;
|-- &lt;subfolder for lambda function here&gt;
&#9;|-- lambda_function.py
&#9;|-- &lt;site-packages content here&gt;</code></pre><h2>Virtual Environment</h2><p>Confirm your Python version matches the desired runtime:</p><pre><code>python --version</code></pre><p>Set up your virtual environment in the folder mentioned:</p><pre><code>python -m venv &lt;location of virtual env folder&gt;</code></pre><p>Switch to your virtual environment:</p><pre><code>&lt;subfolder for virtual environment&gt;\Scripts\activate</code></pre><p>You&#8217;ll know you are in the virtual environment if your PowerShell or terminal looks similar to this:</p><pre><code>(your virtual environment)$</code></pre><p>Proceed to install all libraries:</p><pre><code>(your virtual environment)$ pip install &lt;library name&gt;</code></pre><p><em>Repeat as needed.</em></p><p>Deactivate your environment.</p><pre><code>(your virtual environment)$ deactivate</code></pre><h2>Move Site-Packages contents</h2><p>Once finished, copy the contents of the <em>site-packages</em> folder from the virtual environment to the path mentioned earlier. This can be done using commands in PowerShell or Windows Explorer. The below command, <em>cp,</em> is just an alias for <em>Copy-Item </em>in PowerShell.</p><pre><code>cp -Path .\&lt;subfolder for virtual environment&gt;\Lib\site-packages\ -Recurse -Destination .\&lt;subfolder for lambda&gt;\</code></pre><p>Once all of your files and folders are ready, you can zip it up. <strong>Zip it from within the subfolder</strong>, otherwise it will zip everything inside a folder within the <em>.zip</em>, and we do not want that. Just a reminder that your <em>.zip</em> should now look like this:</p><pre><code>lambda_function.zip
|-- lambda_function.py
|-- &lt;package folders&gt;</code></pre><h2>Conclusion</h2><p>The zip can be used to replace an existing lambda or create a new one. Configure your test case or start using it! Personally, I recommend only using the user interface for development testing. When you are ready to productionize, make sure you are taking advantage of a code pipeline. All of what has been covered can be automated through pipelines like DevOps Pipelines, Gitea Actions, or GitHub Actions. Happy coding!</p>]]></content:encoded></item><item><title><![CDATA[DevOps Tips: Version control your home lab NOW!]]></title><description><![CDATA[Or soon at least]]></description><link>https://blog.3d6564.com/p/version-control-your-home-lab-now</link><guid isPermaLink="false">https://blog.3d6564.com/p/version-control-your-home-lab-now</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Sat, 17 Aug 2024 00:46:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c330bc46-141e-43db-9d3e-f47c3b92a7bf_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Intro</h3><p>I recently realized I had no easy way to spin back up services in my home lab... and that could be a problem. I have over 30 services running at any given time. Version controlling my home lab was truly a game-changer. It has helped me start to get more organized, track changes, and know what worked without fear of things permanently breaking. My home lab work has directly led to success in my career.. and implementing practices like version control is a great example.</p><h3>Core</h3><p>So you may be wondering where to get started with version controlling your home lab. There are two main options: use your own version control service or use a service like <a href="https://www.github.com">GitHub</a>. I'm doing both. 
You might ask what the benefits of using your own service are, so here&#8217;s a breakdown of some pros and cons.</p><p>I am specifically highlighting three positives and two negatives for each option. The lists below are not exhaustive, because there can be arguments made either way. I actually chose to do a mix of both.</p><h4>DIY version control</h4><p><strong>Pros</strong></p><ul><li><p><em>Plenty of options</em> (Gitea and GitLab to name two)</p></li><li><p><em>Increased security</em> by keeping it local</p></li><li><p><em>Learn extra tools</em> that expand your skill set</p></li></ul><p><strong>Cons</strong></p><ul><li><p><em>Risk of data loss</em> if hardware fails</p></li><li><p><em>Learning curve</em> to implement from scratch</p></li></ul><h4>GitHub version control</h4><p><strong>Pros</strong></p><ul><li><p><em>Less configuration</em> to manage</p></li><li><p><em>Still secure</em> with proper configurations</p></li><li><p><em>Public portfolio</em> of skills for future employers</p></li></ul><p><strong>Cons</strong></p><ul><li><p><em>Risk of exposing private/sensitive data</em> with misconfigurations</p></li><li><p><em>Less control</em> over data</p></li></ul><p>After deciding on the version control method, I recommend using Portainer to manage any containerized services. Portainer simplifies the process and integrates well with Git repositories. You don't need to set up your own runners to deploy code, for example.</p><h4>Organize your repo</h4><p>It is important to use a decent structure. Below is a starter structure to organize your home lab. I recommend at least setting up each service group within its own directory. Inside each folder, include the <code>.env</code> needed, and <strong>add your </strong><code>.env</code><strong> to your </strong><code>.gitignore</code><strong>!!!!</strong></p><pre><code>git_repo
|-- &lt;service name&gt;
&#9;|-- .env
&#9;|-- docker-compose.yml
|-- &lt;service name&gt;
&#9;|-- .env
&#9;|-- docker-compose.yml</code></pre><h4>Setting up Portainer with Git</h4><ol><li><p>Set up Portainer CE (if not already configured).</p><pre><code>docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest</code></pre></li><li><p>Complete the Portainer setup steps available in their docs <a href="https://docs.portainer.io/start/install-ce/server/docker/linux">here</a>.</p></li><li><p>Link Your Git Repository</p><ol><li><p>Navigate to <strong>Stacks</strong> and select <strong>Add Stack</strong></p></li><li><p>Choose <strong>Repository</strong> as the deployment method, and enter the URL where the Git repo is accessible</p></li><li><p>For authentication, set up a <strong>Personal Access Token</strong> instead of using your Git password. This is more secure.</p></li></ol></li><li><p>Automatically Sync</p><ol><li><p>Set Portainer to automatically sync at the desired interval for your needs (10m, 1h, 8h, 12h, etc.)</p></li></ol></li></ol><h3>Conclusion</h3><p>These basic steps should get you going with some ideas to improve your home lab's resiliency. 
The skills you learn with this can be beneficial in DevOps, system administration, and almost any developer role.</p>]]></content:encoded></item><item><title><![CDATA[Ditch Manual Starts: Automate Docker-Compose with Systemd]]></title><description><![CDATA[Intro I finally got tired of having to SSH into one of my service virtual machines to start the Docker-Compose file after every reboot.]]></description><link>https://blog.3d6564.com/p/ditch-manual-starts-automate-docker</link><guid isPermaLink="false">https://blog.3d6564.com/p/ditch-manual-starts-automate-docker</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Mon, 29 Jul 2024 12:29:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9c93bcd7-2b2e-4846-995c-5e7e332f386c_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Intro</h3><p>I finally got tired of having to SSH into one of my service virtual machines to start the Docker-Compose file after every reboot. This frustration led me down a rabbit hole of researching the best methods to automate this. Along the way, I discovered a lot of controversy over the best method to launch and manage services at system initialization.. A lot of developers take issue with `systemd` compared to other methods, and debate whether you should use a task scheduler like `cron` at all. While I don't have an answer to settle that debate for you, I can share what ended up working for me.</p><h3>Background</h3><p>I ventured down this path because I've been developing an open-source artifact gathering tool and I needed to automate more of my services to help with testing. I've found a ton of uses for <em><a href="https://github.com/khast3x/redcloud">redcloud</a></em>, a modified Portainer instance for red teaming. It allows easily spinning up red team tools in containers. 
I found this useful because I have been frequently restarting hosts in my homelab while testing my artifact tool.</p><h3>Creating the Systemd Service File</h3><p>To automate the startup of my Docker-Compose services, I created a systemd service file. Here&#8217;s how I did it:</p><ol><li><p><strong>Create the service file:</strong></p><pre><code>sudo nano /etc/systemd/system/redcloud.service</code></pre></li><li><p><strong>Contents of the service file:</strong></p><pre><code>[Unit]
Description=Redcloud Docker Compose Service
Requires=docker.service
After=docker.service
BindsTo=docker.service
ReloadPropagatedFrom=docker.service
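# Note: BindsTo= couples this unit's lifetime to docker.service (the stack
# stops if the Docker daemon stops), while After= only orders startup, which
# is why both appear here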

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash -c "docker-compose -f /path/to/compose/redcloud/redteam-compose.yml up -d --build"
ExecStop=/bin/bash -c "docker-compose -f /path/to/compose/redcloud/redteam-compose.yml down"
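# Note: Type=oneshot with RemainAfterExit=yes keeps the unit marked "active"
# after docker-compose exits, so "systemctl stop redcloud" still runs ExecStop
# Assumption: on hosts that only ship Compose v2, swap "docker-compose" for
# "docker compose" in the two commands above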

[Install]
WantedBy=multi-user.target
</code></pre><p>I won't claim this is the best way to do it, and I welcome any recommendations. However, this configuration worked for me after a couple of other attempts.</p></li></ol><h3>Enabling and Testing the Service</h3><p>Here we enable the service so it starts at boot, then quickly verify that it works.</p><ol><li><p><strong>Reload the systemd daemon:</strong></p><pre><code>sudo systemctl daemon-reload</code></pre></li><li><p><strong>Enable the service so it starts at boot:</strong></p><pre><code>sudo systemctl enable redcloud</code></pre></li><li><p><strong>Start the service:</strong></p><pre><code>sudo systemctl start redcloud</code></pre></li><li><p><strong>Check the service status:</strong></p><pre><code>sudo systemctl status redcloud</code></pre></li><li><p><strong>Reboot the host:</strong></p><pre><code>sudo reboot</code></pre></li><li><p><strong>Confirm the service ran and the container is running:</strong></p><pre><code>sudo systemctl status redcloud
sudo docker ps</code></pre></li></ol><h3><strong>Conclusion</strong></h3><p>That's it! Setting up the service was pretty straightforward. While there may be different or better ways to automate Docker-Compose startups, this method worked reliably for my needs. If you have any suggestions or improvements, I'd love to hear them!</p>]]></content:encoded></item><item><title><![CDATA[Reviving an old GPU: Setting up Ollama and Llama 3.1 in a homelab]]></title><description><![CDATA[I&#8217;ve been wanting to use an old GPU I have had sitting around for a while and the recent release of Llama 3.1&#8230; I think I have found a reason to blow off the dust.]]></description><link>https://blog.3d6564.com/p/reviving-an-old-gpu-setting-up-ollama</link><guid isPermaLink="false">https://blog.3d6564.com/p/reviving-an-old-gpu-setting-up-ollama</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Thu, 25 Jul 2024 21:42:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dddc8ee4-7179-49fb-97f3-6640c08355cd_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been wanting to use an old GPU that has been sitting around for a while, and with the recent release of Llama 3.1&#8230; I think I have found a reason to blow off the dust. </p><h2>Hardware</h2><p>The hardware on this server is not ideal, but this list will hopefully give you an idea of what may be needed.</p><ul><li><p>CPU: Intel i7-3770 @ 3.4GHz</p></li><li><p>RAM: 32GB (4x8GB) DDR3 1333MHz</p></li><li><p>GPU: GTX 1080 with Driver 535.183.01</p></li></ul><p>As you can see, it isn&#8217;t much but it also used to be a pretty good gaming PC! </p><h2>Setup</h2><p>To get started, I am using Portainer to help orchestrate my <code>docker-compose.yml</code>. This allows me to easily manage multiple containers across a fleet of virtual machines. This particular server is only running Plex and now Ollama. </p><ol><li><p>Update packages</p><pre><code><code>sudo apt update</code></code></pre></li><li><p>Install NVIDIA drivers</p><pre><code><code>sudo apt install nvidia-driver-535</code></code></pre></li><li><p>Upgrade system and reboot</p><pre><code><code>sudo apt upgrade
sudo reboot</code></code></pre></li><li><p>Install the NVIDIA Container Toolkit</p><pre><code><code>sudo apt-get install -y nvidia-container-toolkit</code></code></pre></li><li><p>Restart Docker</p><pre><code><code>sudo systemctl restart docker</code></code></pre></li></ol><p>Your mileage may vary from system to system: you may need to install <code>nvidia-docker2</code>, or use <code>nvidia-container-toolkit-base</code> instead of what I have above. If you get the error below, you may need to reinstall the NVIDIA drivers.</p><blockquote><pre><code>Failed to initialize NVML: Driver/library version mismatch</code></pre></blockquote><h2>Docker Compose</h2><p>I mentioned above how I orchestrate my compose files, and I managed to get everything working in a single compose file. I also have other things, such as a reverse proxy, configured that allow me to add SSL certificates on top of these containers. This is mostly pulled from Open WebUI&#8217;s GitHub <a href="https://github.com/open-webui/open-webui/tree/main">here</a>.</p><pre><code>---
version: "3.8"
services:
  ollama:
    volumes:
      - /opt/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
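      # NVIDIA_VISIBLE_DEVICES=all exposes every host GPU to the container;
      # set a specific index (e.g. "0") instead to pin a single card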
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["gpu"]
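            # Note: this deploy/reservations GPU syntax needs docker-compose
            # v1.28+ (or Compose v2); older setups set "runtime: nvidia" on
            # the service instead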
  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    volumes:
      - /opt/ollama/webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 8080:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
restart: unless-stopped</code></pre><h2>Conclusion</h2><p>Thank you for surviving this long. With the above, I was able to get Llama 3.1 running on my GTX 1080 and it is actually quite fast. I&#8217;ve been a big user of OpenAI&#8217;s GPT-4o and, speed-wise, this is a bit faster in its responses. </p><p>I did notice some differences. I had to prompt Llama 3.1 to give me code as output; otherwise it leaned towards text output. I will post more in this space as I continue testing.</p><p>Thanks all!</p><p>Cory</p>]]></content:encoded></item><item><title><![CDATA[A high level intro]]></title><description><![CDATA[Exploring the possibilities]]></description><link>https://blog.3d6564.com/p/a-high-level-intro</link><guid isPermaLink="false">https://blog.3d6564.com/p/a-high-level-intro</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Sun, 14 Jul 2024 00:33:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5086f2b3-bb49-4df6-8a49-2aa9691a8a23_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone!</p><p>I am excited to use this platform to talk about major projects I&#8217;m working on. 
One of the things I have been thinking about over the past few months is what I can do to leave a positive impact on the tech and cyber community. I have a passion for programming, data engineering, cloud architecture, 3D printing, and mentoring others. </p><p>I wear a few different hats and my experience comes from time working in Military Intelligence and Cybersecurity for the US Army, as well as being a Solution Architect at a software startup and now a Solution Architect at Caterpillar. I have a variety of experience ranging from database administration, programming in Python and Java, data science and machine learning, data engineering, and DevOps.</p><p>I have started an open-source project on GitHub (<a href="https://github.com/3d6564/artifactor">artifactor</a>) and I am hoping to get to a place where it helps fill a gap I&#8217;ve seen after talking with many friends and partners. Hopefully I am not just making up the gap either&#8230; lol. </p><p>Artifactor is meant to help scan any number of hosts in parallel for incident response artifacts using basic SSH/WinRM connections. There are plenty of tools out there, but I don&#8217;t think any are as lightweight and easy to use. The tool is flexible and adaptable, and will continue to be so. 
It can gather any artifact retrievable from SSH or WinRM commands&#8230; so just about anything. </p><p>Thanks for stopping by and I look forward to sharing more.</p><p>Cory</p>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is 3d6564.]]></description><link>https://blog.3d6564.com/p/coming-soon</link><guid isPermaLink="false">https://blog.3d6564.com/p/coming-soon</guid><dc:creator><![CDATA[Cory Robinson]]></dc:creator><pubDate>Sat, 13 Jul 2024 23:41:40 GMT</pubDate><content:encoded><![CDATA[<p>This is 3d6564.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.3d6564.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.3d6564.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>