Hi, I'm Diego Cabello. This website is my public-facing professional portfolio for tech job applications. I also sometimes use it to host files to send to people. Currently I work as a software engineer at XRGeneralHospital in Ann Arbor, MI.
This website was designed with a minimalist aesthetic; those design choices are detailed further in these essays. That said, I am also very capable of designing more modern and complicated web interfaces that look good (see).
Coding Projects
Spotify Lyrics Scraper
During quarantine I wanted to see the lyrics to the songs I was listening to; this was before Spotify added lyrics. I am keeping this in the portfolio even though I made it over four years ago because it was pretty inventive for me as a high schooler. Github

How it Works (a sketch of these steps follows below)
1. Get the song I am currently listening to using the official Spotify API and auth.
2. Check whether the song's lyrics have been saved before; if so, return those and stop.
3. Send a request to the official Genius API with the song title and artist.
4. Save the text response and return it.
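A minimal sketch of those four steps, using plain requests against the documented Spotify and Genius endpoints. The tokens, the JSON cache file, and the helper names are illustrative, not the project's actual code (which predates this sketch by several years):

```python
import json
import os
import requests

CACHE_PATH = "lyrics_cache.json"  # illustrative local cache of previously fetched songs

def get_current_song(spotify_token):
    """Step 1: ask the official Spotify API what is playing right now."""
    r = requests.get(
        "https://api.spotify.com/v1/me/player/currently-playing",
        headers={"Authorization": f"Bearer {spotify_token}"},
    )
    r.raise_for_status()  # note: Spotify returns 204 with no body if nothing is playing
    item = r.json()["item"]
    return item["name"], item["artists"][0]["name"]

def load_cache():
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def lookup_genius(genius_token, title, artist):
    """Step 3: search the official Genius API with the song title and artist."""
    r = requests.get(
        "https://api.genius.com/search",
        params={"q": f"{title} {artist}"},
        headers={"Authorization": f"Bearer {genius_token}"},
    )
    r.raise_for_status()
    hits = r.json()["response"]["hits"]
    return hits[0]["result"] if hits else None

def get_lyrics_info(spotify_token, genius_token):
    title, artist = get_current_song(spotify_token)
    cache = load_cache()
    key = f"{artist} - {title}"
    if key in cache:                                      # step 2: reuse a saved result
        return cache[key]
    result = lookup_genius(genius_token, title, artist)
    cache[key] = result
    with open(CACHE_PATH, "w") as f:                      # step 4: save and return
        json.dump(cache, f)
    return result
```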
Twitter Tools
I am building a suite of tools to automate Twitter functions outside the paid API.

Bookmarks Scraper (July 2024) Github
I wanted to download all my bookmarked images and posts from Twitter and index them, but it cost $100/mo to do this with the official Twitter API. So, I built a cost-effective workaround.

The Method

Scraping the data (a sketch of this loop follows after the lists):
- Log into Twitter in the browser, go to the page you want to scrape from, and locate one of the GET requests to https://x.com/i/graphql/$PAGE in the browser's network tab.
- Copy the cookies and request headers and paste them as arguments for a curl command.
- Run the command from Python using the subprocess library within a while loop.
- Write the JSON responses to a text file for later parsing.
- Extract the bottom cursor from the last response and use it as an argument for the next iteration.
- This will run about 90 times, returning 20 posts each, until it times out and blocks you.

Analyzing the data:
- Parse each response for all the information about the posts, their authors, and their media content.
- Store the parsed data in an SQLite database.
- Download the images or videos using the Python requests library.

Areas for Expansion and Improvement
This scraper is an ongoing project with potential research-level scalability (as the paid API effectively prices out a lot of researchers). Improvements include: improved cookie management and potential...
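A minimal sketch of the scraping loop described under The Method above. It assumes the copied request headers live one per line in a local headers.txt file, and that the bottom cursor can be found by walking the JSON response; the URL, the way the cursor is passed (in reality it rides inside the request's GraphQL variables), and the find_bottom_cursor helper are all illustrative rather than the project's real code:

```python
import json
import subprocess
import time

# Placeholders copied from the browser's network tab for your own session.
GRAPHQL_URL = "https://x.com/i/graphql/$PAGE"  # $PAGE is the query id shown in the network tab
HEADERS_FILE = "headers.txt"                   # one "Header-Name: value" line per header, cookies included

def find_bottom_cursor(node):
    """Recursively search the response for a Bottom cursor entry (exact shape varies by endpoint)."""
    if isinstance(node, dict):
        if node.get("cursorType") == "Bottom" and "value" in node:
            return node["value"]
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        found = find_bottom_cursor(child)
        if found is not None:
            return found
    return None

def fetch_page(cursor=None):
    """Run curl through subprocess with the copied headers, optionally passing the pagination cursor."""
    with open(HEADERS_FILE) as f:
        headers = [line.strip() for line in f if line.strip()]
    url = GRAPHQL_URL if cursor is None else f"{GRAPHQL_URL}?cursor={cursor}"  # simplified; see lead-in
    cmd = ["curl", "-s", url]
    for h in headers:
        cmd += ["-H", h]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def scrape_bookmarks(out_path="responses.txt"):
    """Page through the endpoint, appending every raw JSON response to a file for later parsing."""
    cursor = None
    with open(out_path, "a") as f:
        while True:  # runs roughly 90 times, 20 posts per page, before the session gets blocked
            raw = fetch_page(cursor)
            f.write(raw + "\n")
            cursor = find_bottom_cursor(json.loads(raw))
            if cursor is None:  # no further pages (or the response no longer parses as expected)
                break
            time.sleep(2)  # small delay between requests
```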
Sculblog
Design
Sculblog is written in Python and built on top of pre-existing technologies: Debian, Apache, HTML, CSS, PHP, SQLite, and the browser. These technologies are established, reliable, and easily customizable, which makes them a perfect base for a lightweight blogging framework.

Versioning
Sculblog 0.1.6 is for an Apache server running on Debian. Future versions will support Nginx.

Installation
On a fresh Debian instance, run install.sh, either locally or fetched with curl from http://diegocabello.com/sculblog/install.sh.
- Create a Python venv in your home directory using python -m venv sculblog
- Run source sculblog/bin/activate to activate the venv
- Run pip install sculblog

Features

Root Directory Structure
All posts are written in Markdown or HTML, converted to HTML if necessary, and stored in the database (a sketch of that step follows below). The files in the server root directory /var/www/html/ are linked to templates stored in the 'resources' folder in the server directory, and those templates connect to the database. Templates are written in PHP by default. The database is stored in the 'database' folder in the server directory. Compared to alternatives like Hugo, this configuration is much simpler and doesn't require learning a whole new scripting language.

Optimized Content Serving
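Returning to the post ingestion described under Root Directory Structure: a minimal sketch of that Markdown-to-database step, assuming the python-markdown package and an illustrative posts table. Sculblog's real schema, paths, and conversion code may differ:

```python
import sqlite3
from pathlib import Path

import markdown  # pip install markdown; Sculblog's actual converter may differ

DB_PATH = "database/sculblog.db"  # illustrative path under the server directory

def ingest_post(path: str):
    """Convert a Markdown or HTML post to HTML and store it in the SQLite database."""
    source = Path(path).read_text()
    if path.endswith(".md"):
        html = markdown.markdown(source)  # only Markdown needs converting
    else:
        html = source                     # HTML posts are stored as-is
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS posts (title TEXT PRIMARY KEY, html TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO posts (title, html) VALUES (?, ?)",
        (Path(path).stem, html),
    )
    conn.commit()
    conn.close()

# Usage: ingest_post("drafts/sculblog-design-choices.md")
```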
LENTS (Local Extendable Nested Tagging System) (unfinished)
Dynamic Generative Interface (unfinished)
Embedded Work (Under Construction)
iPhone Whisper Integration (Under Construction)
Essays
Sculblog Design Choices
My previous work with web development, including with React and Next.js, was not exhaustive, but it was enough for me to realize that what these frameworks are usually used to build is not what I think the internet should be. The "interactive web applications" that have been popular of late have detracted from what the internet was originally intended to be: a codified protocol to share information and documents between computers.1 These applications, with their bells, whistles, fancy animations, scroll-hijacking, chatbots, and 3D effects, are a huge waste of effort and generate nowhere near as much true economic value as the effort poured into them. So, I want to build a framework that brings the internet back to what it was originally intended to be; one that focuses less on presentation and interactivity, and more on quickly and easily categorizing, storing, and transmuting information. I want to see people write. The internet, for its existence thus far, has been a catalyst for niche ideologies and groups to form and then spread into the mainstream (looksmaxxing, peating, and microplastics awareness, to quickly name a few). (If I want people to write more, it might be more worth my effort to make a syntactical tool that expresses ideas as concisely as grammatically possible without losing any information... And perhaps a suite to determine whether something is "worth reading" or not according to ...
What I learned from my First Startup
This essay will be about Wonder Clothing. I am still writing it.