Reverse-order feeds show me a truth

I recently did something crazy – I reversed the order of my RSS feed reader, so I’m not seeing the newest items first, but the oldest. I did this in a single folder – Web Comics – so I could finally catch up with every artist’s evolution, and since comics are easier reads, I’ll be able to pound through a thousand unread items out of the eight thousand in my stack right now.

What I didn’t anticipate is that the setting is app-wide. So now every list I see is in old-to-new order.

Yesterday, I read a post from Sophie Haskins figuring out which virtualization solution to go with for her home setup. She played with a few options (and skipped the one I wanted to read about – Proxmox) and settled on running Ubuntu as the host with minikube on top. I saw that she’d linked to a tweet, and I wanted to ask why she had skipped Proxmox, so I went over. That’s when I realized the post is from 2017, so the conversation is long gone.

After I learned that I’d reversed the order of all my feeds, I forgot about it.

Just now, I was reading a post by Vicki Boykis, where she talks about how Pinterest sends her emails to entice her back to their site. What was odd was that she was discussing it in the context of Halloween. That threw me off, till I realized that the post I was reading is from November 2013!

From social sites trying to pull people back to their platforms for (as Vicki puts it) click$, to virtualization solutions for your home lab… the more things change, the more they stay the same! Be it 2013, 2017, or 2022, we’re looking at the same issues, aren’t we?

(Sorry about the clickbait title. I was having a hard time figuring out what the title of this Musing should be, so I just went with this one. Recommend a better title, please?)

Streaks

If you came here to read about a fitness streak, you’ll be sorely disappointed. I’ve been on a different kind of streak lately – I’ve been reading a lot of RSS feeds. Specifically, I’ve been spending time going through a lot of webcomics.

See, I love reading RSS feeds. I definitely overload every feed reader I’ve used, but none so much as I’ve overloaded my current one – an app on iOS called Fiery Feeds. I have about 16k unread items on here (don’t judge me).

Out of these, about three thousand are webcomics. So I’m starting from there. I pick up an unread feed and blaze through it. Usually, that’s 60–100 items marked as read in a day, so three thousand items works out to a month or two of reading. At this rate, I’ll be current in a couple of months. Of course, I’m focusing on webcomics because they’re super easy to read, with not a lot of context needed and a quick read time.

But that’s not all. Comics portray the ethos of their time very easily. Whether I’m reading a slice-of-life comic from a few years ago, where the biggest topic was the latest Starbucks winter theme, or the latest xkcd, talking, as everyone else is, about COVID-19, it becomes very easy to see the timeline and to consume the news of the day through comics. Of course, I also love reading more serious endeavors like Gaia and Slack Wyrm, which have enduring storylines, recurring characters, and a thread you kinda have to hold on to, preferably by reading from the first comic. These are just plain fun to read and follow along!

While reading may be all fun, I’m sure writing and making webcomics is not. All the hard work of describing the scene, the props, the clothing is already done by the artist, and I just have to consume those visuals. Compared to essays, where I have to read through from top to bottom to understand the story, and where my attention is definitely pulled away before I’d like it to be, comics are easy to consume – though I suspect the effort that goes into a good essay is less than what goes into a good comic.

Now, once I get done with the comics, I’d like to continue reading my RSS feeds. I follow a lot of personal feeds, mostly from random strangers I’ve encountered online. It feels great to be in a space where I can just read a person’s diary entry, with some of their personal thoughts splashed on the Internet for me to see. Besides the occasional rant, most people put good thoughts on their websites, and it feels great to read those positive thoughts.

One of the reasons these “personals” are easy to read is, frankly, Twitter. A lot of folks cross-post from their blogs to Twitter and other microblogging sites. This means they have to stick to a length limit, and most of them try to get their thoughts done in about thirty words or less. I wouldn’t say that’s the real average, because I’ve never measured. But birdbrained as we are, reading more than that has often ended in my attention getting pulled away, so people who post thirty words or less and still express themselves fully are aces in my book!

But once I’m done catching up with the personals, of course I’d like to read more serious, longer stuff, which has been piling up. Most of the time, I’ll read a few paragraphs and either abandon the writing for being too dry, or shove it into Instapaper to catch up with it in a few years. My “long articles” section is at about five thousand entries, with writing from AI Weirdness, Linux Journal Blogs, and InkMango, to name a few. One of these days, once my habit is built and my streaks have left me with no webcomics to indulge in, I’ll dive into these heavier writings, and hopefully come out more educated. For now though, laughs are enough!

Notes on setting up Freedbin

Here are some notes on how to set up Rachel Sharp‘s Freedbin, a Dockerized version of the popular Feedbin RSS feed reader.

I had some trouble setting this up on my Windows 10 machine. A lot of the issues I faced had to do with setup and environment variables. I don’t think I faced any real issues due to my host being Windows, other than the terrible thing that Windows 10 itself is. Anyways.

First of all, I had an already-running Postgres instance for other Docker images, so there was a conflict I wasn’t able to resolve, since Rachel’s docker-compose file pulls its images directly from Docker Hub and they aren’t easily configurable. If someone can guide me to using the same Postgres instance for two Docker projects, that would be great! Right now, I have two Docker containers running Postgres.
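
For reference, here’s a sketch of one way this is commonly done: put both projects on a shared, pre-created Docker network, and let the second project skip defining its own Postgres service. The network and credential names below are hypothetical, and this assumes the newer Compose file format –

# One-time setup: create a network both projects can join
#   docker network create shared_db

# --- docker-compose.yml for the project that owns Postgres ---
services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=feedbiner
      - POSTGRES_PASSWORD=feedbiner
    networks:
      - shared_db
networks:
  shared_db:
    external: true

# --- docker-compose.yml for the second project (separate file) ---
# No postgres service here; the hostname "postgres" resolves
# across the shared network to the other project's container.
services:
  app:
    image: freedbin_app
    environment:
      - DATABASE_URL=postgres://feedbiner:feedbiner@postgres:5432/feedbin_production
    networks:
      - shared_db
networks:
  shared_db:
    external: true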

So, (real) first of all, I downloaded the repo to my own machine to make modifications.
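
In practice, that’s just a clone; the URL below is a placeholder, so use the repository linked above –

# Clone the Freedbin repo so the compose file and Dockerfile can be edited locally
# (placeholder URL – substitute the actual repository from Rachel's post)
git clone https://github.com/example/freedbin.git
cd freedbin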

To begin, in the docker-compose.yml, I changed the name of the service from postgres to postgresfeedbin and changed the port to 5433 instead of the 5432 which was already in use.

I also changed the app image from rachsharp/feedbin to a local name freedbin_app and added the build line, so I could build the changes I’m putting in.

I added the restart: unless-stopped line to ensure my containers never stop! 🙂

There’s a discussion on the GitHub repo about replacing Postlight’s Mercury service with a self-hosted, open-source version of the same. Postlight has sunset their hosted servers, so it makes sense to run our own. One alternative is Feedbin’s own extract service, but that is only available in newer versions of Feedbin, which Rachel’s Docker container doesn’t use. Instead, I already had a Docker image of Mercury from Docker Hub that I’d set up for tt-rss and other projects, which I connected to using the MERCURY_HOST environment variable. In this setup, the MERCURY_API_KEY doesn’t do anything – Mercury ignores it, and it seems Feedbin does too.
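
If you don’t already have a Mercury container lying around, one can be started in a single line. The image name below is the community mercury-parser-api build often paired with tt-rss – treat it as an assumption and substitute whatever Mercury image you prefer –

# Run a self-hosted Mercury Parser API on port 3000, for MERCURY_HOST to point at
# (wangqiru/mercury-parser-api is a community image; any Mercury parser API works)
docker run -d --name mercury -p 3000:3000 wangqiru/mercury-parser-api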

All of the above are summarized here, as part of the docker-compose.yml file –

  app:
    # image: rachsharp/feedbin
    image: freedbin_app
    build: .
    restart: unless-stopped
    environment:
      - MERCURY_HOST=http://192.168.99.100:3000
      - MERCURY_API_KEY=abcd
      - SECRET_KEY_BASE=abcd
      - POSTGRES=postgresfeedbin
      - POSTGRES_USERNAME=feedbiner
      - POSTGRES_PASSWORD=feedbiner
      - PGPASSWORD=feedbin
      - DATABASE_URL=postgres://feedbiner:feedbiner@postgresfeedbin:5433/feedbin_production
[...]
  postgresfeedbin:
    image: postgres
    restart: unless-stopped
    command: -p 5433
    environment:
      - POSTGRES_USER=feedbiner
      - POSTGRES_PASSWORD=feedbiner
    ports:
      - 5433:5433
    expose:
      - 5433
    volumes:
      - postgres_data_feedbin:/var/lib/postgresql/data
volumes:
  redis_data:
  postgres_data_feedbin:

I further had to make changes to the startup_script.sh file, as shown here –

if psql -h postgresfeedbin -p 5433 -U feedbin -lqt | cut -d \| -f 1 | grep -qw feedbin_production; then

As seen, I’ve just pointed it to the new service name and port.

At this point, the service was able to start. I was able to create an account, get in, and add feeds. However, I follow a lot of feeds, so importing an OPML file makes good sense for me. But the import settings page was failing due to a missing AWS config. I looked up solutions, and one workaround is to just disable the file-upload library called CarrierWave, which connects to AWS. Guess what gets disabled if you disable CarrierWave? The import/export page.

So, I went about creating an S3 bucket on AWS, getting credentials, and making the S3 bucket publicly accessible. I don’t know why that last bit is needed. Perhaps with a newer version of Feedbin these issues wouldn’t pop up, but in Rachel’s version this is the case, so I went with it.
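
For anyone who prefers the AWS CLI over the console, the bucket setup looks roughly like this; the bucket and user names are placeholders, and note that regions other than us-east-1 need an explicit LocationConstraint –

# Create the bucket in us-west-1 (bucket name is a placeholder)
aws s3api create-bucket --bucket my-freedbin-imports --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1

# Generate the access key pair that goes into the Dockerfile
# (assumes an IAM user named feedbin-uploader already exists)
aws iam create-access-key --user-name feedbin-uploader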

After I made my S3 bucket and got the AWS credentials, I added them to the Dockerfile, as here. The variables are already there; they just need to be filled in –

ENV FONT_STYLESHEET=https://fonts.googleapis.com/css?family=Crimson+Text|Domine|Fanwood+Text|Lora|Open+Sans \
    RAILS_ENV=production \
    RACK_ENV=production \
    AWS_ACCESS_KEY_ID='my_key_id' \
    AWS_S3_BUCKET='my_bucket_name' \
    AWS_SECRET_ACCESS_KEY='sooooo_secret!' \
    DEFAULT_URL_OPTIONS_HOST=http://localhost \
    FEEDBIN_URL=http://localhost \
    PUSH_URL=http://example.com \
    RAILS_SERVE_STATIC_FILES=true

There’s one more catch. The Feedbin code uses its own version of CarrierWave called CarrierWave Direct, which defaults to the ‘us-east-1’ AWS region. If your bucket is there, you’re fine. Mine is in ‘us-west-1’, so I had to go into the config/initializers/carrierwave.rb file and change the following to add my region –

config.fog_credentials = {
  provider: "AWS",
  aws_access_key_id: ENV["AWS_ACCESS_KEY_ID"],
  aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
  region: 'us-west-1',
}

Finally, I was ready to build and deploy, starting with the following command –

docker-compose build

You’ll notice a new image in your docker images list –

$ docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
freedbin_app                  latest              20a0334cd11c        30 minutes ago      1.27GB

and now you can deploy –

docker-compose up

It takes a while, as Rachel mentions somewhere, but all the services come up perfectly, and I was able to import my OPML file. I noticed that the S3 bucket holds just the lone OPML file, so perhaps it won’t cost me any money? Eventually, once I know the import is done, I’ll go in and delete the bucket.
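
When that day comes, the AWS CLI makes the cleanup a one-liner (same placeholder bucket name as above) –

# Remove the bucket and everything left inside it
aws s3 rb s3://my-freedbin-imports --force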

Big, big thanks to Rachel Sharp for creating Freedbin. It’s a great way to get started with Feedbin, and while setting this up I learnt how to use Docker, created my first Docker container, and uploaded my first project to Docker Hub. Hopefully, I’ll be able to build Freedbin from scratch using the latest Feedbin code and Feedbin’s extract service, following the principles set down by Rachel.