#+STARTUP: content
#+AUTHOR: Elia el Lazkani
#+HUGO_BASE_DIR: ../.
#+HUGO_AUTO_SET_LASTMOD: t

* Custom Pages
:PROPERTIES:
:EXPORT_HUGO_CUSTOM_FRONT_MATTER: :noauthor true :nocomment true :nodate true :nopaging true :noread true
:EXPORT_HUGO_MENU: :menu false
:EXPORT_HUGO_SECTION: .
:EXPORT_HUGO_WEIGHT: auto
:END:
** Not Found
:PROPERTIES:
:EXPORT_FILE_NAME: not-found
:EXPORT_HUGO_LASTMOD: 2021-07-04
:EXPORT_DATE: 2020-02-08
:CUSTOM_ID: not-found
:END:

*** 404 Not Found

Oops... We don't know how you ended up here.

There is nothing here to look at...

Head back over to the [[/][home]] page.
** Forbidden
:PROPERTIES:
:EXPORT_FILE_NAME: forbidden
:EXPORT_HUGO_LASTMOD: 2021-07-04
:EXPORT_DATE: 2020-06-05
:CUSTOM_ID: forbidden
:END:

*** 403 Forbidden

Naughty naughty !

What brought you to a forbidden page ?

Take this =403 Forbidden= and head over to the [[/][main site]].
* Pages
:PROPERTIES:
:EXPORT_HUGO_CUSTOM_FRONT_MATTER: :noauthor true :nocomment true :nodate true :nopaging true :noread true
:EXPORT_HUGO_MENU: :menu main
:EXPORT_HUGO_SECTION: pages
:EXPORT_HUGO_WEIGHT: auto
:END:

** About
:PROPERTIES:
:EXPORT_FILE_NAME: about
:EXPORT_HUGO_LASTMOD: 2021-07-04
:EXPORT_DATE: 2019-06-21
:CUSTOM_ID: about
:END:

*** Who am I ?

I am a DevOps cloud engineer with a passion for technology, automation, Linux and OpenSource.
I've been on Linux since the /early/ 2000's and have contributed, in some small capacity, to some open source projects along the way.

I dabble in this space and I blog about it. This is how I learn, this is how I evolve.

*** Contact Me

If, for some reason, you'd like to get in touch, you have several options.

- Find me on [[https://libera.chat/][libera]] in ~#LearnAndTeach~.
- Email me at ~blog[at]lazkani[dot]io~

If you use /GPG/, and you should, my public key is ~2383 8945 E07E 670A 4BFE 39E6 FBD8 1F2B 1F48 8C2B~.
** FAQ
:PROPERTIES:
:EXPORT_FILE_NAME: faq
:EXPORT_HUGO_LASTMOD: 2021-07-04
:EXPORT_DATE: 2021-07-04
:CUSTOM_ID: faq
:END:

*** What is this ?

This is my humble blog where I post things related to DevOps, in the hope that I, or someone else, might benefit from it.

*** Wait what ? What is DevOps ?

[[https://duckduckgo.com/?q=what+is+devops+%3F&t=ffab&ia=web&iax=about][Duckduckgo]] defines DevOps as:

#+BEGIN_QUOTE
DevOps is a software engineering culture and practice that aims at unifying software development and software operation. The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives.
#+END_QUOTE

In short, we build infrastructure that is easily deployable, maintainable and, in all forms, makes the lives of developers a breeze.

*** What do you blog about ?

Anything and everything related to DevOps. The field is very big and complex, with a lot of different tools and technologies involved.

I try to blog about interesting and new things as much as possible, when time permits.

*** Does this blog have *RSS* ?

Yup, here's the [[/posts/index.xml][link]].
* Posts
:PROPERTIES:
:EXPORT_HUGO_SECTION: posts
:END:

** Backup :@backup:
*** DONE BorgBackup :borg:borgbackup:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2020-01-30
:EXPORT_DATE: 2020-01-30
:EXPORT_FILE_NAME: borgbackup
:CUSTOM_ID: borgbackup
:END:
I usually lurk around *Freenode* in a few projects that I use, can learn from and/or help with. This is a great opportunity to learn new things /all the time/.

This story starts out the same way, but that's where the similarities end. Someone in =#Weechat= asked a question that caught my attention because it was, sort of, off topic. The question was: how do you back up your stuff ?
#+hugo: more

I mean, if I were asked that, I would've mentioned revision-controlled off-site repositories for the code that I have.
For the personal stuff, on the other hand, I would've admitted to simple, rudimentary solutions like =rsync=, =tar= and external drives.
So I was sort of happy with my backup solution, it has worked. Plain and simple.

I have to admit that, by modern standards, it might not offer the ability to go back in time to a certain point.
But I use /file systems/ that offer /snapshot/ capabilities. I can recover from previous snapshots and send them somewhere safe.
Archiving and encrypting those is not a simple process, I wish it was. That limits storage possibilities if you care to keep your data private.

But if you know me, you'd know that I'm always open to new ways of doing things.

I can't remember the conversation exactly, but the name *BorgBackup* was mentioned (thank you, whoever you are). That's when things changed.
**** BorgBackup

[[https://www.borgbackup.org/][Borg]] is defined as a

#+BEGIN_QUOTE
Deduplicating archiver with compression and encryption
#+END_QUOTE

Although this is a very accurate and encompassing definition, it doesn't really show you how /AWESOME/ this thing is.

I had to go to the docs first before I stumbled upon this video.

#+BEGIN_EXPORT md
[Watch the asciinema demo](https://asciinema.org/a/133292)
#+END_EXPORT

It can be a bit difficult to follow the video, I understand.

This is why I decided to write this post, to sort of explain to you how *Borg* can back up your stuff.
**** Encryption

Oh yeah, that's the *first* thing I look at when I consider any suggested backup solution. *Borg* offers built-in /encryption/ and /authentication/. You can read about it in detail in the [[https://borgbackup.readthedocs.io/en/stable/usage/init.html#encryption-modes][docs]].

So that's a check.
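As a quick sketch, this is how you would create an encrypted repository (the repository path is an assumption):

#+BEGIN_SRC bash
# Create a new repository with repokey encryption;
# borg will prompt you for a passphrase
borg init --encryption=repokey /path/to/repo
#+END_SRC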
**** Compression

This is another thing I look for in a suggested backup solution. And I'm happy to report that *Borg* has this under the belt as well.
*Borg* currently supports /LZ4/, /zlib/, /LZMA/ and /zstd/. You can also tune the level of compression. Pretty neat !
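For example, choosing /zstd/ with a custom level at archive creation could look like this (repository path and archive name are assumptions):

#+BEGIN_SRC bash
# Back up the home directory using zstd at compression level 10
borg create --compression zstd,10 /path/to/repo::my-archive ~/
#+END_SRC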
**** Full Backup

I've watched a few videos and read a bit of their documentation, and they talk about *FULL BACKUP*.
Which means every time you run *Borg*, it will take a full backup of your stuff; a full backup at that point in time, don't forget.
The implication of this is that you have a versioned list of your backups, and you can go back in time to any of them.
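A minimal sketch of what that looks like, assuming the repository from before; every run creates a new, timestamped archive, and =borg list= shows them all.

#+BEGIN_SRC bash
# Each run takes a new full backup of the same data
borg create /path/to/repo::'backup-{now}' ~/

# List all the archives; your versioned backups
borg list /path/to/repo
#+END_SRC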
Yes, you read that right. *Borg* does a full backup every time you run it. That's a pretty neat feature.

If you're a bit ahead of me, you were gonna say: woooow there bud ! I have *Gigabytes* of data, what do you mean *FULL BACKUP*, you keep saying *FULL BACKUP*.

I mean *FULL BACKUP*, wait until you hear about the next feature.
**** Deduplication

Booyah ! It has deduplication. Ain't that awesome. I've watched a presentation by the project's original maintainer explaining this.
I have one thing to say. It's pretty good. How good, you may ask ?

My answer would be: good enough to fool me into thinking that it was taking snapshots of my data.

#+BEGIN_EXAMPLE
-----------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
All archives:               34.59 GB              9.63 GB              1.28 GB

                       Unique chunks         Total chunks
Chunk index:                   47772               469277
#+END_EXAMPLE
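If you'd like to pull up these statistics yourself, *Borg* can print them for any repository; a sketch, assuming the usual path:

#+BEGIN_SRC bash
# Show repository statistics, including the deduplicated size
borg info /path/to/repo
#+END_SRC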
It wasn't until I dug deeper into the matter that I understood that it was a full backup, with the deduplication taking care of the rest.

**** Check

*Borg* offers a way to verify the consistency of the repository and the archives within. This way, you can make sure that your backups haven't been corrupted.

This is a very good feature, and a must-have, in my opinion, for a backup solution. *Borg* has /YOU/ covered.
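Running a check is a one-liner; a sketch, with =--verify-data= as an optional, more thorough (and slower) variant:

#+BEGIN_SRC bash
# Verify the consistency of the repository and its archives
borg check /path/to/repo

# Optionally, also verify the integrity of the data itself
borg check --verify-data /path/to/repo
#+END_SRC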
**** Restore

A backup solution is nothing if you can't get your data back.
*Borg* has a few ways for you to get your data.
You can either create an /archive/ file out of a backup, or you can export a file, a directory or the whole directory tree from a backup.
You can also, if you like, mount a backup and get stuff out.

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Mounting a *Borg* backup is done using /fuse/.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
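As a sketch, both restore paths look something like this (paths and archive name are assumptions):

#+BEGIN_SRC bash
# Extract part of an archive into the current directory
borg extract /path/to/repo::my-archive home/user/Documents

# Or mount the archive with fuse, browse it, then unmount it
borg mount /path/to/repo::my-archive /mnt/backup
borg umount /mnt/backup
#+END_SRC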
**** Conclusion

*Borg* is a great tool for backup. It comes in an easily installable, self-contained binary so you can use it pretty much anywhere, giving you no excuse /whatsoever/ not to use it.
Their documentation is very good, and *Borg* is easy to use.
It offers you all the features you need to do off-site and on-site backups of all your important data.

I'll be testing *Borg* moving forward for my data. I'll make sure to report back anything I find, in the future, related to the subject.
*** DONE Automating Borg :borgmatic:borgbackup:borg:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2020-02-02
:EXPORT_DATE: 2020-02-02
:EXPORT_FILE_NAME: automating-borg
:CUSTOM_ID: automating-borg
:END:
In the previous blog post, entitled [[#borgbackup]], I talked about *borg*.
If you read that post, you would've noticed that *borg* has a lot of features.
And with a lot of features comes a lot of automation.

If you were thinking about using *borg*, you should either make a /simple cron/ or you're gonna have to write an elaborate script to take care of all the different steps.

What if I told you there's another way ? An easier way ! The *Borgmatic* way... What would you say ?
#+hugo: more
**** Borgmatic

*Borgmatic* is defined on their [[https://torsion.org/borgmatic/][website]] as follows.

#+BEGIN_QUOTE
borgmatic is simple, configuration-driven backup software for servers and workstations. Protect your files with client-side encryption. Backup your databases too. Monitor it all with integrated third-party services.
#+END_QUOTE

When it comes down to it, *borgmatic* uses *borg*'s /API/ to automate a list of configurable /tasks/.
This way, it saves you the trouble of writing your own scripts to automate these steps.
*Borgmatic* uses a /YAML/ configuration file. Let's configure a few tasks.
**** Location

First, let's start by configuring the locations that *borg* is going to be working with.

#+BEGIN_SRC yaml
location:
  source_directories:
    - /home/

  repositories:
    - user@backupserver:sourcehostname.borg

  one_file_system: true

  exclude_patterns:
    - /home/*/.cache
    - '*.pyc'
#+END_SRC

This tells *borg* that we need to back up our =/home= directories, excluding a few patterns.
Let's not forget that we told *borg* where the repository is located.
**** Storage

We need to configure the storage next.

#+BEGIN_SRC yaml
storage:
  # Recommended
  # encryption_passcommand: secret-tool lookup borg-repository repo-name

  encryption_passphrase: "ReallyStrongPassphrase"
  compression: zstd,15
  ssh_command: ssh -i /path/to/private/key
  borg_security_directory: /path/to/base/config/security
  archive_name_format: 'borgmatic-{hostname}-{now}'
#+END_SRC

In this section, we tell *borg* a little bit of information about our repository: what the credentials are, where it can find them, etc.

The easy way is to go with a =passphrase=, but I recommend using an =encryption_passcommand= instead.
I also use =zstd= for compression instead of the default =lz4=; you'd better do your research before you change the default.
I also recommend, just as they do, the use of a security directory as well.
**** Retention

We can configure a retention for our backups, if we like.

#+BEGIN_SRC yaml
retention:
  keep_hourly: 7
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6
  keep_yearly: 2

  prefix: "borgmatic-"
#+END_SRC

The part about what to keep, from /hourly/ to /yearly/, is self-explanatory.
I would like to point out the =prefix= part, as it is important.
This is the /prefix/ that *borgmatic* uses to consider backups for *pruning*.

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Watch out for the retention =prefix=
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
**** Consistency

After the updates, we'd like to check our backups.

#+BEGIN_SRC yaml
consistency:
  checks:
    - repository
    - archives

  check_last: 3

  prefix: "borgmatic-"
#+END_SRC

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Watch out, again, for the consistency =prefix=
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
**** Hooks

Finally, hooks.

I'm going to talk about hooks a bit. Hooks can be used to back up *MySQL*, *PostgreSQL* or *MariaDB*.
There are also hooks for =on_error=, =before_backup=, =after_backup=, =before_everything= and =after_everything=.
You can also hook into third party services, which you can check on their webpage.

I deployed my own, so I configured my own.
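A sketch of what a =hooks= section could look like; the commands here are placeholders, not what I actually run:

#+BEGIN_SRC yaml
hooks:
  # Runs before any backup starts
  before_backup:
    - echo "Starting a backup."

  # Runs if any error occurs
  on_error:
    - echo "Backup failed, check the logs."
#+END_SRC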
**** Borgmatic Configuration

Let's put everything together now.

#+BEGIN_SRC yaml
location:
  source_directories:
    - /home/

  repositories:
    - user@backupserver:sourcehostname.borg

  one_file_system: true

  exclude_patterns:
    - /home/*/.cache
    - '*.pyc'

storage:
  # Recommended
  # encryption_passcommand: secret-tool lookup borg-repository repo-name

  encryption_passphrase: "ReallyStrongPassphrase"
  compression: zstd,15
  ssh_command: ssh -i /path/to/private/key
  borg_security_directory: /path/to/base/config/security
  archive_name_format: 'borgmatic-{hostname}-{now}'

retention:
  keep_hourly: 7
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6
  keep_yearly: 2

  prefix: "borgmatic-"

consistency:
  checks:
    - repository
    - archives

  check_last: 3

  prefix: "borgmatic-"
#+END_SRC

Now that we have everything together, let's save it in =/etc/borgmatic.d/home.yaml=.
**** Usage

If you have *borg* and *borgmatic* already installed on your system, and the *borgmatic* configuration file in place, you can test it out.

You can create the repository.

#+BEGIN_EXAMPLE
# borgmatic init -v 2
#+END_EXAMPLE
You can list the backups for the repository.

#+BEGIN_EXAMPLE
# borgmatic list --last 5
borgmatic-home-2020-01-30T22:01:30 Thu, 2020-01-30 22:01:42 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-01-31T22:02:12 Fri, 2020-01-31 22:02:24 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-01T22:01:34 Sat, 2020-02-01 22:01:45 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-02T16:01:22 Sun, 2020-02-02 16:01:32 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-02T18:01:36 Sun, 2020-02-02 18:01:47 [0000000000000000000000000000000000000000000000000000000000000000]
#+END_EXAMPLE
You could run a check.

#+BEGIN_EXAMPLE
# borgmatic check -v 1
/etc/borgmatic.d/home.yaml: Pinging Healthchecks start
/borg/home: Running consistency checks
Remote: Starting repository check
Remote: Starting repository index check
Remote: Completed repository check, no problems found.
Starting archive consistency check...
Analyzing archive borgmatic-home-2020-02-01T22:01:34 (1/3)
Analyzing archive borgmatic-home-2020-02-02T16:01:22 (2/3)
Analyzing archive borgmatic-home-2020-02-02T18:01:36 (3/3)
Orphaned objects check skipped (needs all archives checked).
Archive consistency check complete, no problems found.

summary:
/etc/borgmatic.d/home.yaml: Successfully ran configuration file
#+END_EXAMPLE
But most of all, if you simply run =borgmatic= without any parameters, it will run through the whole configuration and apply all the steps.

At this point, you can simply add the =borgmatic= command to a *cron* job to run on an interval.
The other option would be to configure a =systemd= *timer* and *service* to run this on an interval.
The latter is usually provided to you if you used your *package manager* to install *borgmatic*.
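As a sketch, a system-wide /cron/ entry could be as simple as this (the binary path may differ on your system):

#+BEGIN_EXAMPLE
# /etc/cron.d/borgmatic
# Run borgmatic every day at 3 AM, logging to syslog
0 3 * * * root /usr/bin/borgmatic --syslog-verbosity 1
#+END_EXAMPLE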
**** Conclusion

If you've checked out *borg* and found it too much work to script, give *borgmatic* a try.
I've been using borgmatic for a few weeks now with no issues at all.
I recently hooked it into a monitoring system, so I will have a better view of when it runs and how much time each run takes.
Also, if any of my backups fail, I get notified by email. I hope you enjoy *borg* and *borgmatic* as much as I do.
*** DONE Dotfiles with /Chezmoi/ :dotfiles:chezmoi:encryption:templates:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2020-10-05
:EXPORT_DATE: 2020-10-05
:EXPORT_FILE_NAME: dotfiles-with-chezmoi
:CUSTOM_ID: dotfiles-with-chezmoi
:END:
A few months ago, I went on a search for a solution for my /dotfiles/.

I tried projects like [[https://www.gnu.org/software/stow/][GNU Stow]], [[https://github.com/anishathalye/dotbot][dotbot]] and a [[https://www.atlassian.com/git/tutorials/dotfiles][bare /git/ repository]].
Each one of these solutions has its advantages and its disadvantages, but I found mine in [[https://www.chezmoi.io/][/Chezmoi/]].
/Chezmoi/ ? That's *French*, right ? How is learning *French* going to help me ?
#+hugo: more
**** Introduction

On a /*nix/ system, whether /Linux/, /BSD/ or even /Mac OS/ now, the applications one uses have their configuration saved in the user's home directory. These files are called /configuration/ files. Usually, these configuration files start with a =.=, which on these systems designates hidden files (they do not show up with a simple =ls=). Due to their names, these /configuration/ files are also referred to as /dotfiles/.

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
I will be using /dotfiles/ and /configuration files/ interchangeably in this article; they can be thought of as the same thing.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

One example of such files is the =.bashrc= file found in the user's /home directory/. It allows the user to configure /bash/ and change some behaviours.

Now that we understand what /dotfiles/ are, let's talk a little bit about the /previously mentioned/ solutions.
They deserve mentioning, especially if you're looking for such a solution.
***** GNU Stow

/GNU Stow/ leverages the power of /symlinks/ to keep your /configuration/ in a *centralized* location.
Wherever your repository lives, /GNU Stow/ will mimic the internal structure of said repository in your *home directory* by /smartly symlinking/ everything.

I said /smartly/ because it tries to *minimize* the amount of /symlinks/ created by /symlinking/ to common root directories if possible.

By having all your configuration files under one directory structure, it is easier to push them to any public repository and share them with others.

The downside is, you end up with a lot of /symlinks/. It is also worth mentioning that not all applications behave well when their /configuration directories/ are /symlinked/. Otherwise, /GNU Stow/ is a great project.
***** Dotbot

/Dotbot/ is a /Python/ project that *aims* at automating your /dotfiles/. It gives you great control over what and how to manage your /dotfiles/.

Having it written in /Python/ means it is very easy to install with =pip=. It also means that it /should/ be easy to migrate it to different systems.
/Dotbot/ has a lot going for it. If the idea of having control over every aspect of your /dotfiles/, including the /possibility/ of setting up the environment along with them, appeals to you, then /dotbot/ is for you.

Well, it's not for *me*.
***** Bare /Git/ Repository

This is arguably the /most elegant/ solution of them all.

The nice thing about this solution is its /simplicity/ and /cleanliness/. It is /essentially/ creating a /bare git/ repository /somewhere/ in your /home directory/, specifying the /home directory/ itself to be the /working directory/.

If you are wondering where one would use a /bare git/ repository in real life, other than this use case, look no further than any /git server/. On the server, /Gitea/ for example, your repository is only a /bare/ repository. One has to clone it to get the /working directory/ along with it.

Anyway, back to our topic. This is a great solution if you don't have to worry about things you would like to hide.

By hide, I mean things like /credentials/, /keys/ or /passwords/, which *never* belong in a /repository/.
You will need to find solutions for these types of files. I was looking for something /less involving/ and /more involved/.
**** /Chezmoi/ to the rescue ?

Isn't that what they *all* say ?

I like how the creator(s) define [[https://www.chezmoi.io/][/Chezmoi/]]:

#+BEGIN_QUOTE
Manage your dotfiles across multiple machines, securely.
#+END_QUOTE

Pretty basic, straight to the point. Unfortunately, it's a little bit harder to grasp the concept of how it works.
/Chezmoi/ basically /generates/ the /dotfiles/ from the /local repository/. These /dotfiles/ are saved in different forms in the /repository/, but they *always* generate the same output: the /dotfiles/. Think of /Chezmoi/ as a /dotfiles/ templating engine; in its basic form, it saves your /dotfiles/ as is and /deploys/ them on *any* machine.
**** Working with /Chezmoi/

I think we should take a /quick/ look at /Chezmoi/ to see how it works.
/Chezmoi/ is written in /Golang/, making it /fairly/ easy to [[https://www.chezmoi.io/docs/install/][install]], so I will forgo that boring part.
***** First run

To start using /Chezmoi/, one has to *initialize* a new /Chezmoi repository/.

#+BEGIN_SRC bash
chezmoi init
#+END_SRC

This will create a *new* /git repository/ in =~/.local/share/chezmoi=. This is now the *source state*, where /Chezmoi/ will get your /dotfiles/.
***** Plain /dotfiles/ management with /Chezmoi/

Now that we have a /Chezmoi/ repository, we can start to /populate/ it with /dotfiles/.

Let's assume that we would like to start managing one of our /dotfiles/ with /Chezmoi/.
I'm going with an /imaginary application/'s configuration directory.
This directory will hold different files with /versatile/ content types.
This is going to showcase some of /Chezmoi/'s capabilities.

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
This is how I use /Chezmoi/. If you have a better way to do things, I'd like to hear about it!
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
****** Adding a /dotfile/

This *DS9* application has its configuration directory in =~/.ds9/=, where we find the =config=.

The configuration looks like any /generic/ /ini/ configuration.

#+BEGIN_SRC ini :tangle ~/.ds9/config
[character/sisko]
Name = Benjamin
Rank = Captain
Credentials = sisko-creds.cred
Mastodon = sisko-api.mastodon
#+END_SRC

/Nothing/ special about this file. Let's add it to /Chezmoi/.

#+BEGIN_SRC bash
chezmoi add ~/.ds9/config
#+END_SRC
****** Listing /dotfiles/

And /nothing/ happened... Hmm...

#+BEGIN_SRC bash
chezmoi managed
#+END_SRC

#+BEGIN_EXAMPLE
/home/user/.ds9
/home/user/.ds9/config
#+END_EXAMPLE

Okay, it seems that it is being managed.
****** Diffing /dotfiles/

We can /test/ it out by doing something like this.

#+BEGIN_SRC bash
mv ~/.ds9/config ~/.ds9/config.old
chezmoi diff
#+END_SRC

#+BEGIN_EXAMPLE
install -m 644 /dev/null /home/user/.ds9/config
--- a/home/user/.ds9/config
+++ b/home/user/.ds9/config
@@ -0,0 +1,5 @@
+[character/sisko]
+Name = Benjamin
+Rank = Captain
+Credentials = sisko-creds.cred
+Mastodon = sisko-api.mastodon
#+END_EXAMPLE

Alright, everything looks as it should be.
****** Apply /dotfiles/

But that's only a /diff/. How do I make /Chezmoi/ apply the changes ? Because my /dotfile/ is still =config.old=.

Okay, we can actually get rid of the =config.old= file and make /Chezmoi/ regenerate the configuration.

#+BEGIN_SRC bash
rm ~/.ds9/config ~/.ds9/config.old
chezmoi -v apply
#+END_SRC

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
I like to use the =-v= flag to check what is *actually* being applied.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

#+BEGIN_EXAMPLE
install -m 644 /dev/null /home/user/.ds9/config
--- a/home/user/.ds9/config
+++ b/home/user/.ds9/config
@@ -0,0 +1,5 @@
+[character/sisko]
+Name = Benjamin
+Rank = Captain
+Credentials = sisko-creds.cred
+Mastodon = sisko-api.mastodon
#+END_EXAMPLE

And we get the same output as the =diff=. Nice!
The configuration file was also recreated, that's awesome.
****** Editing /dotfiles/

If you've followed so far, you might have wondered... If I edit =~/.ds9/config=, then /Chezmoi/ is going to *override* it!
*YES*, *yes* it will.

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Always use /Chezmoi/ to edit your managed /dotfiles/. Do *NOT* edit them directly.
*ALWAYS* use =chezmoi diff= before /applying/.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

To /edit/ your managed /dotfile/, simply tell /Chezmoi/ about it.

#+BEGIN_SRC bash
chezmoi edit ~/.ds9/config
#+END_SRC

/Chezmoi/ will use your =$EDITOR= to open the file for you to edit. Once saved, it's saved in the /repository database/.

Be aware, at this point the changes are not reflected in your /home/ directory, *only* in the /Chezmoi source state/. Make sure you *diff* and then *apply* to make the changes in your /home/.
***** /Chezmoi/ repository management

As mentioned previously, the repository is found in =~/.local/share/chezmoi=.
I *always* forget where it is; luckily, /Chezmoi/ has a solution for that.

#+BEGIN_SRC bash
chezmoi cd
#+END_SRC

Now, we are in the repository. We can work with it as a /regular/ /git/ repository.
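For example, a plain /git/ workflow from here might look like this (the remote and branch names are assumptions):

#+BEGIN_SRC bash
# Review, commit and push your dotfile changes
git status
git add .
git commit -m "Add ds9 configuration"
git push origin master
#+END_SRC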
When you're done, don't forget to =exit=.
***** Other features

It is worth mentioning at this point that /Chezmoi/ offers a few more integrations.
****** Templating

Because /Chezmoi/ is written in /Golang/, it can leverage the power of the /Golang [[https://www.chezmoi.io/docs/how-to/#use-templates-to-manage-files-that-vary-from-machine-to-machine][templating]]/ system.
One can replace /repeatable/ values like *email* or *name* with a template like ={{ .email }}= or ={{ .name }}=.

This will result in a replacement of these /templated variables/ with their real values in the resulting /dotfile/.
This is another reason why you should *always* edit your managed /dotfiles/ through /Chezmoi/.

Our /previous/ example would look a bit different.
#+BEGIN_SRC ini :tangle ~/.ds9/config
[character/sisko]
Name = {{ .sisko.name }}
Rank = {{ .sisko.rank }}
Credentials = sisko-creds.cred
Mastodon = sisko-api.mastodon
#+END_SRC

And we would add it a bit differently now.

#+BEGIN_SRC bash
chezmoi add --template ~/.ds9/config
#+END_SRC

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Follow the [[https://www.chezmoi.io/docs/how-to/#use-templates-to-manage-files-that-vary-from-machine-to-machine][documentation]] to /configure/ the *values*.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
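As a sketch, the values for our example could live in /Chezmoi/'s configuration file, usually found at =~/.config/chezmoi/chezmoi.toml= (the keys here are assumptions, matching the template above):

#+BEGIN_SRC toml
# ~/.config/chezmoi/chezmoi.toml
[data]
    email = "sisko@ds9.example"

[data.sisko]
    name = "Benjamin"
    rank = "Captain"
#+END_SRC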
****** Password manager integration

Once you have the power of /templating/ on your side, you can always take it one step further.
/Chezmoi/ has integrations with a big list of [[https://www.chezmoi.io/docs/how-to/#keep-data-private][password managers]]. These can be used directly in the /configuration files/.

In our /hypothetical/ example, we can think of the /credentials/ file (=~/.ds9/sisko-creds.cred=).
#+BEGIN_SRC ini :tangle ~/.ds9/sisko-creds.cred
Name = {{ (keepassxc "sisko.ds9").Name }}
Rank = {{ (keepassxc "sisko.ds9").Rank }}
Access_Code = {{ (keepassxc "sisko.ds9").AccessCode }}
#+END_SRC

Do not /forget/ that this is also using the /templating/ engine, so you need to add it as a /template/.

#+BEGIN_SRC bash
chezmoi add --template ~/.ds9/sisko-creds.cred
#+END_SRC
****** File encryption

Wait, what ! You almost slipped away right there, old fellow.

We have our /Mastodon/ *API* key in the =sisko-api.mastodon= file. The whole file cannot be pushed to a repository.
It turns out that /Chezmoi/ can use /gpg/ to [[https://www.chezmoi.io/docs/how-to/#use-gpg-to-keep-your-secrets][encrypt your files]], making it possible for you to push them.

To add an encrypted file to the /Chezmoi/ repository, use the following command.

#+BEGIN_SRC bash
chezmoi add --encrypt ~/.ds9/sisko-api.mastodon
#+END_SRC
****** Misc

There is a list of other features that /Chezmoi/ supports that I did not mention.
I have not used all the /features/ offered yet. You should check the [[https://www.chezmoi.io/][website]] for the full documentation.
**** Conclusion

I have fully migrated to /Chezmoi/ so far. I have used all the features above, and it has worked flawlessly.

I like the idea that it offers *all* the features I need while at the same time staying out of the way.
I find myself, often, editing the /dotfiles/ in my /home/ directory as a /dev/ version. Once I get to a configuration I like, I add it to /Chezmoi/. If I ever mess up badly, I ask /Chezmoi/ to override my changes.

I understand it adds a little bit of /overhead/ with the use of =chezmoi= commands, which I aliased to =cm=. But the end result is a /home/ directory which seems untouched by any tools (no symlinks, no copies, etc...), making it easier to migrate /out/ of /Chezmoi/ and into another solution if I ever choose to in the future.
** Configuration Management :@configuration_management:
*** DONE Ansible testing with Molecule :ansible:molecule:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2019-06-21
:EXPORT_DATE: 2019-01-11
:EXPORT_FILE_NAME: ansible-testing-with-molecule
:CUSTOM_ID: ansible-testing-with-molecule
:END:
When I first started using [[https://www.ansible.com/][ansible]], I did not know about [[https://molecule.readthedocs.io/en/latest/][molecule]]. It was a bit daunting to start a /role/ from scratch and try to develop it without having the ability to test it. Then a co-worker of mine told me about molecule, and everything changed.
#+hugo: more

I do not have any of the tools I need installed on this machine, so I will go through, step by step, how I set up ansible and molecule on any new machine I come across for writing ansible roles.
**** Requirements

What we are trying to achieve in this post is a working ansible role that can be tested inside a docker container. To be able to achieve that, we need to install docker on the system. Follow the instructions on [[https://docs.docker.com/install/][installing docker]] found on the docker website.
**** Good Practices

First things first. Let's start by making sure that we have python installed properly on the system.

#+BEGIN_EXAMPLE
$ python --version
Python 3.7.1
#+END_EXAMPLE

Because in this case I have /python3/ installed, I can create a /virtualenv/ more easily, without the use of external tools.
#+BEGIN_EXAMPLE
# Create the directory to work with
$ mkdir -p sandbox/test-roles
# Navigate to the directory
$ cd sandbox/test-roles/
# Create the virtualenv
~/sandbox/test-roles $ python -m venv .ansible-venv
# Activate the virtualenv
~/sandbox/test-roles $ source .ansible-venv/bin/activate
# Check that your virtualenv activated properly
(.ansible-venv) ~/sandbox/test-roles $ which python
/home/elijah/sandbox/test-roles/.ansible-venv/bin/python
#+END_EXAMPLE

At this point, we can install the required dependencies.
#+BEGIN_EXAMPLE
$ pip install ansible molecule docker
Collecting ansible
  Downloading https://files.pythonhosted.org/packages/56/fb/b661ae256c5e4a5c42859860f59f9a1a0b82fbc481306b30e3c5159d519d/ansible-2.7.5.tar.gz (11.8MB)
    100% |████████████████████████████████| 11.8MB 3.8MB/s
Collecting molecule
  Downloading https://files.pythonhosted.org/packages/84/97/e5764079cb7942d0fa68b832cb9948274abb42b72d9b7fe4a214e7943786/molecule-2.19.0-py3-none-any.whl (180kB)
    100% |████████████████████████████████| 184kB 2.2MB/s

...

Successfully built ansible ansible-lint anyconfig cerberus psutil click-completion tabulate tree-format pathspec future pycparser arrow
Installing collected packages: MarkupSafe, jinja2, PyYAML, six, pycparser, cffi, pynacl, idna, asn1crypto, cryptography, bcrypt, paramiko, ansible, pbr, git-url-parse, monotonic, fasteners, click, colorama, sh, python-gilt, ansible-lint, pathspec, yamllint, anyconfig, cerberus, psutil, more-itertools, py, attrs, pluggy, atomicwrites, pytest, testinfra, ptyprocess, pexpect, click-completion, tabulate, future, chardet, binaryornot, poyo, urllib3, certifi, requests, python-dateutil, arrow, jinja2-time, whichcraft, cookiecutter, tree-format, molecule, docker-pycreds, websocket-client, docker
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 ansible-lint-3.4.23 anyconfig-0.9.7 arrow-0.13.0 asn1crypto-0.24.0 atomicwrites-1.2.1 attrs-18.2.0 bcrypt-3.1.5 binaryornot-0.4.4 cerberus-1.2 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 click-6.7 click-completion-0.3.1 colorama-0.3.9 cookiecutter-1.6.0 cryptography-2.4.2 docker-3.7.0 docker-pycreds-0.4.0 fasteners-0.14.1 future-0.17.1 git-url-parse-1.1.0 idna-2.8 jinja2-2.10 jinja2-time-0.2.0 molecule-2.19.0 monotonic-1.5 more-itertools-5.0.0 paramiko-2.4.2 pathspec-0.5.9 pbr-4.1.0 pexpect-4.6.0 pluggy-0.8.1 poyo-0.4.2 psutil-5.4.6 ptyprocess-0.6.0 py-1.7.0 pycparser-2.19 pynacl-1.3.0 pytest-4.1.0 python-dateutil-2.7.5 python-gilt-1.2.1 requests-2.21.0 sh-1.12.14 six-1.11.0 tabulate-0.8.2 testinfra-1.16.0 tree-format-0.1.2 urllib3-1.24.1 websocket-client-0.54.0 whichcraft-0.5.2 yamllint-1.11.1
#+END_EXAMPLE
**** Creating your first ansible role

Once all the steps above are complete, we can start by creating our first ansible role.

#+BEGIN_EXAMPLE
$ molecule init role -r example-role
--> Initializing new role example-role...
Initialized role in /home/elijah/sandbox/test-roles/example-role successfully.

$ tree example-role/
example-role/
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── molecule
│   └── default
│       ├── Dockerfile.j2
│       ├── INSTALL.rst
│       ├── molecule.yml
│       ├── playbook.yml
│       └── tests
│           ├── __pycache__
│           │   └── test_default.cpython-37.pyc
│           └── test_default.py
├── README.md
├── tasks
│   └── main.yml
└── vars
    └── main.yml

9 directories, 12 files
#+END_EXAMPLE
You can find what each directory is for and how ansible works by visiting [[https://docs.ansible.com][docs.ansible.com]].
***** =meta/main.yml=

The meta file needs to be modified and filled with information about the role. This is not a required file to modify if you are keeping this for yourself, for example. But it is a good idea to have as much information as possible if this is going to be released. In my case, I don't need any fanciness as this is just sample code.
#+BEGIN_SRC yaml
---
galaxy_info:
  author: Elia el Lazkani
  description: This is an example ansible role to showcase molecule at work
  license: license (BSD-2)
  min_ansible_version: 2.7
  galaxy_tags: []
dependencies: []
#+END_SRC
***** =tasks/main.yml=

This is where the magic is set in motion. Tasks are the smallest entities in a role that do small and idempotent actions. Let's write a few simple tasks to create a user and install a service.
#+BEGIN_SRC yaml
---
# Create the user example
- name: Create 'example' user
  user:
    name: example
    comment: Example user
    shell: /bin/bash
    state: present
    create_home: yes
    home: /home/example

# Install nginx
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
  notify: Restart nginx
#+END_SRC
***** =handlers/main.yml=

If you noticed, we are notifying a handler to be called after installing /nginx/. All handlers notified will run after all the tasks complete and each handler will only run once. This is a good way to make sure that you don't restart /nginx/ multiple times if you call the handler more than once.
#+BEGIN_SRC yaml
---
# Handler to restart nginx
- name: Restart nginx
  service:
    name: nginx
    state: restarted
#+END_SRC
***** =molecule/default/molecule.yml=

It's time to configure molecule to do what we need. We need to start an ubuntu docker container, so we need to specify that in the molecule YAML file. All we need to do is change the image line to specify that we want an =ubuntu:bionic= image.
#+BEGIN_SRC yaml
---
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: instance
    image: ubuntu:bionic
provisioner:
  name: ansible
  lint:
    name: ansible-lint
scenario:
  name: default
verifier:
  name: testinfra
  lint:
    name: flake8
#+END_SRC
***** =molecule/default/playbook.yml=

This is the playbook that molecule will run. Make sure that you have all the steps that you need here. I will keep this as is.
#+BEGIN_SRC yaml
---
- name: Converge
  hosts: all
  roles:
    - role: example-role
#+END_SRC
**** First Role Pass

It is time to test our role and see what's going on.
#+BEGIN_EXAMPLE
(.ansible-role) ~/sandbox/test-roles/example-role/ $ molecule converge
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    ├── prepare
    └── converge

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=4    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=4    changed=3    unreachable=0    failed=0
#+END_EXAMPLE
It looks like the *converge* step succeeded.
**** Writing Tests

It is always a good practice to write unittests when you're writing code. Ansible roles should not be an exception. Molecule offers a way to run tests, which you can think of as unittests, to make sure that what the role gives you is what you were expecting. This helps future development of the role and keeps you from falling into previously solved traps.
***** =molecule/default/tests/test_default.py=

Molecule leverages the [[https://testinfra.readthedocs.io/en/latest/][testinfra]] project to run its tests. You can use other tools if you so wish, and there are many. In this example we will be using /testinfra/.
#+BEGIN_SRC python
import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_hosts_file(host):
    f = host.file('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_user_created(host):
    user = host.user("example")
    assert user.name == "example"
    assert user.home == "/home/example"


def test_user_home_exists(host):
    user_home = host.file("/home/example")
    assert user_home.exists
    assert user_home.is_directory


def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
#+END_SRC

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title"><b>Warning</b></p>
#+END_EXPORT
Uncomment =truthy: disable= in =.yamllint= found at the base of the role.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
#+BEGIN_EXAMPLE
(.ansible_venv) ~/sandbox/test-roles/example-role $ molecule test
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /home/elijah/sandbox/test-roles/example-role/...
Lint completed successfully.
--> Executing Flake8 on files found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
/home/elijah/.virtualenvs/world/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
  EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
Lint completed successfully.
--> Executing Ansible Lint on /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml...
Lint completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'syntax'

playbook: /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=4    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=4    changed=3    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'idempotence'
Idempotence completed successfully.
--> Scenario: 'default'
--> Action: 'side_effect'
Skipping, side effect playbook not configured.
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.1.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/elijah/sandbox/test-roles/example-role/molecule/default, inifile:
plugins: testinfra-1.16.0
collected 5 items

tests/test_default.py .....                                              [100%]

=============================== warnings summary ===============================

...

==================== 5 passed, 7 warnings in 27.37 seconds =====================
Verifier completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0
#+END_EXAMPLE
I have a few warning messages (likely because I am using /python 3.7/, and some of the libraries do not fully support the new standards released with it yet), but all my tests passed.
**** Conclusion

Molecule is a great tool to test ansible roles quickly, and while developing them. It also comes bundled with a bunch of other features from different projects that will test all aspects of your ansible code. I suggest you start using it when writing new ansible roles.
** Container :@container:
*** DONE Linux Containers :linux:kernel:docker:podman:dockerfile:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2021-02-27
:EXPORT_DATE: 2021-02-27
:EXPORT_FILE_NAME: linux-containers
:CUSTOM_ID: linux-containers
:END:
Our story dates /all the way/ back to 2006, believe it or not, when the first steps were taken towards what we know today as *containers*.
We'll discuss their history, how to build them and how to use them. Stick around, you might enjoy the ride!
#+hugo: more
**** History

***** 2006-2007 - The /[[https://lkml.org/lkml/2006/10/20/251][Generic Process Containers]]/ lands in Linux

This was renamed thereafter to /[[https://en.wikipedia.org/wiki/Cgroups][Control Groups]]/, popularly known as /cgroups/, and landed in /Linux/ version =2.6.24=.
/Cgroups/ are the first piece of the puzzle in /Linux Containers/. We will be talking about /cgroups/ in detail later.

***** 2008 - Namespaces

Even though /namespaces/ have been around since 2002, /Linux/ version =2.4.19=, they saw [[https://www.redhat.com/en/blog/history-containers][rapid development]] beginning in 2006 and into 2008.
/Namespaces/ are the other piece of the puzzle in /Linux Containers/. We will talk about /namespaces/ in more detail later.
***** 2008 - LXC

/LXC/ finally shows up!
/LXC/ is the first form of /containers/ on the /Linux/ kernel.
/LXC/ combined both /cgroups/ and /namespaces/ to provide isolated environments; containers.

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
It is worth mentioning that /LXC/ runs full /operating system/ containers from an image.
In other words, /LXC/ containers are meant to run more than one process.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT
***** 2013 - Docker

/Docker/ offered a full set of tools for working with /containers/, making it easier than ever to work with them.
/Docker/ containers are designed to only run the application process.
Unlike /LXC/, the =PID= =1= of a Docker container is expected to be the application running in the container.
We will be discussing this topic in more detail later.
**** Concepts

***** /cgroups/

****** What are cgroups ?

Let's find out ! Better yet, let's use the tools at our disposal to find out together...

Open a *terminal* and run the following command.

#+BEGIN_SRC bash
man 7 cgroups
#+END_SRC

This should open the ~man~ pages for =cgroups=.

#+BEGIN_QUOTE
Control groups, usually referred to as cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored. The kernel's cgroup interface is provided through a pseudo-filesystem called cgroupfs. Grouping is implemented in the core cgroup kernel code, while resource tracking and limits are implemented in a set of per-resource-type subsystems (memory, CPU, and so on).
#+END_QUOTE
****** What does this all mean ?

This can all be simplified by explaining it in a different way.
Essentially, you can think of =cgroups= as a way for the /kernel/ to *limit* what you can *use*.

This gives us the ability to give a /container/ only *1* CPU out of the 4 available to the /kernel/.
Or maybe limit the memory allowed to the container to *512MB*.
This way the container cannot overload the resources of the system in case it runs a fork bomb, for example.
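To make this concrete, here is a sketch using =podman= (which we will meet later in this post) to apply exactly those limits to a container:

#+BEGIN_SRC bash
# Give the container 1 CPU and 512MB of memory, courtesy of cgroups
podman run --cpus 1 --memory 512m -it ubuntu:20.04 bash
#+END_SRC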
But, =cgroups= do not limit what we can "/see/".
***** /namespaces/

****** /Namespaces/ to the rescue !

As we did before, let's check the ~man~ page for =namespaces=.

#+BEGIN_SRC bash
man 7 namespaces
#+END_SRC

#+BEGIN_QUOTE
A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes. One use of namespaces is to implement containers.
#+END_QUOTE
Wooow ! That's more mumbo jumbo ?!

****** Is it really simple ?

Let's simplify this one as well.

You can think of =namespaces= as a way for the /kernel/ to *limit* what we *see*.

There are multiple =namespaces=, like =cgroup_namespaces=, which /virtualizes/ the view of a process's =cgroups=.
In other words, inside the =cgroup= the process with =PID= *1* is not =PID= *1* on the *system*.

The =namespaces= manual page lists them; you can check them out for more details. But I hope you get the gist of it !
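You can actually watch a /namespace/ do its thing with the =unshare= tool from /util-linux/; a quick sketch:

#+BEGIN_SRC bash
# Run ps in new PID and mount namespaces; inside them,
# the forked process sits at the top of the process tree
sudo unshare --pid --fork --mount-proc ps -ef
#+END_SRC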
***** Linux Containers

We are finally here! Let's talk /Linux Containers/.

The first topic we need to know about is *images*.
****** What are container images ?

We mentioned before that /Docker/ came in and offered tooling around /containers/.

One of the concepts they used, in docker images, is *layers*.

First of all, an image is a /file-system/ representation of a container.
It is an on-disk, read-only, image. It sort of looks like your /Linux/ *filesystem*.

Then, layers are added on top to provide functionality. What are these layers, you might ask ? We will see them in action.

Let's look at my system.
#+BEGIN_SRC bash
lsb_release -a
#+END_SRC

#+begin_example
LSB Version:    n/a
Distributor ID: ManjaroLinux
Description:    Manjaro Linux
Release:        20.2.1
Codename:       Nibia
#+end_example

As you can see, I am running =Manjaro=. Keep that in mind.

Let's take a look at the kernel running on this machine.
#+BEGIN_SRC bash
|
|
uname -a
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
Linux manjaro 5.10.15-1-MANJARO #1 SMP PREEMPT Wed Feb 10 10:42:47 UTC 2021 x86_64 GNU/Linux
|
|
#+end_example
|
|
|
|
So, it's /kernel version/ =5.10.15=. Remember this one as well.
|
|
|
|
******* /neofetch/
|
|
|
|
I would like to /test/ a tool called =neofetch=. Why ?
|
|
|
|
- First reason, I am not that creative.
|
|
- Second, it's a nice tool, you'll see.
|
|
|
|
We can test =neofetch=
|
|
|
|
#+BEGIN_SRC bash
|
|
neofetch
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
fish: Unknown command: neofetch
|
|
#+end_example
|
|
|
|
Look at that! We don't have it installed...
|
|
Not a big deal. We can download an image and test it inside.
|
|
|
|
****** Pulling an image
|
|
|
|
Let's download a docker image. I am using =podman=, an open source project that allows us to *use* containers.
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title"><b>Note</b></p>
|
|
#+END_EXPORT
|
|
You might want to run these commands with =sudo= privileges.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
#+BEGIN_SRC bash
|
|
podman pull ubuntu:20.04
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
f63181f19b2fe819156dcb068b3b5bc036820bec7014c5f77277cfa341d4cb5e
|
|
#+end_example
|
|
|
|
That long hash is the =ID= of the ~Ubuntu~ image we just pulled.
|
|
|
|
As you can see, we have pulled an image from the repositories online. We can see further information about the image.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman images
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
REPOSITORY TAG IMAGE ID CREATED SIZE
|
|
docker.io/library/ubuntu 20.04 f63181f19b2f 5 weeks ago 75.3 MB
|
|
#+end_example
|
|
|
|
Much better, now we can see that we have an ~Ubuntu~ image downloaded from [[https://hub.docker.com][docker.io]].
|
|
|
|
****** What's a container then ?
|
|
|
|
A container is nothing more than a running instance of an image.
|
|
|
|
Let's list our containers.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman ps -a
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
|
#+end_example
|
|
|
|
We have none. Let's start one.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman run -it ubuntu:20.04 uname -a
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
Linux 57453b419a43 5.10.15-1-MANJARO #1 SMP PREEMPT Wed Feb 10 10:42:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
|
|
#+end_example
|
|
|
|
It's running the same /kernel/ as our machine... Are we really inside a container ?
|
|
|
|
#+BEGIN_SRC bash
|
|
podman run -it ubuntu:20.04 hostname -f
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
6795b85eeb50
|
|
#+end_example
|
|
|
|
Okay ?! And *our* /hostname/ is ?
|
|
|
|
#+BEGIN_SRC bash
|
|
hostname -f
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
manjaro
|
|
#+end_example
|
|
|
|
Hmm... They have different /hostnames/...
|
|
|
|
Let's see if it's *really* ~Ubuntu~.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman run -it ubuntu:20.04 bash -c 'apt-get update && apt-get install -y vim'
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
|
|
Get:2 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
|
|
Get:3 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
|
|
Get:4 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
|
|
Get:5 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
|
|
Get:6 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
|
|
Get:7 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
|
|
...
|
|
Setting up libpython3.8:amd64 (3.8.5-1~20.04.2) ...
|
|
Setting up vim (2:8.1.2269-1ubuntu5) ...
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
|
|
...
|
|
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in auto mode
|
|
...
|
|
Processing triggers for libc-bin (2.31-0ubuntu9.1) ...
|
|
|
|
#+end_example
|
|
|
|
This should not work on my ~Manjaro~; =apt-get= is not a thing here.
The output is a bit large, so I truncated it for readability, but we seem to have installed =vim= successfully.
|
|
|
|
****** Building a container image
|
|
|
|
Now that we saw what an /image/ is and what a /container/ is, we can explore a bit inside a container to see things more clearly.
|
|
|
|
So, what can we do with containers ? We can use the layering system and the tooling /docker/ created to build and distribute them.
|
|
|
|
Let's go back to our =neofetch= example.
|
|
|
|
I want to get an ~Ubuntu~ image, then install =neofetch= on it.
|
|
|
|
First step, create a ~Dockerfile~ in your current directory. It should look like this.
|
|
|
|
#+BEGIN_SRC dockerfile :dir /tmp/docker/ :tangle /tmp/docker/Dockerfile.ubuntu :mkdirp yes
|
|
FROM ubuntu:20.04
|
|
|
|
RUN apt-get update && \
|
|
apt-get install -y neofetch
|
|
#+END_SRC
|
|
|
|
This file has two commands:
|
|
|
|
- =FROM= designates the base image to use.
|
|
This is the base image we will be building upon.
|
|
In our case, we chose ~Ubuntu:20.04~. You can find the images on multiple platforms.
|
|
To mention a few, we have /Dockerhub/, /Quay.io/ and a few others.
|
|
|
|
By default, this downloads from /Dockerhub/.
|
|
|
|
- =RUN= designates the commands to run. Pretty simple.
|
|
We are running a couple of commands that should be very familiar to any user familiar with /debian-based/ OS's.
|
|
|
|
Now that we have a /Dockerfile/, we can build the container.
|
|
|
|
#+BEGIN_SRC bash :dir /sudo::/tmp/docker/ :results output
|
|
podman build -t neofetch-ubuntu:20.04 -f Dockerfile.ubuntu .
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
STEP 1: FROM ubuntu:20.04
|
|
STEP 2: RUN apt-get update && apt-get install -y neofetch
|
|
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
|
|
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
|
|
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
|
|
...
|
|
Fetched 17.2 MB in 2s (7860 kB/s)
|
|
Reading package lists...
|
|
...
|
|
The following additional packages will be installed:
|
|
chafa dbus fontconfig-config fonts-dejavu-core fonts-droid-fallback
|
|
fonts-noto-mono fonts-urw-base35 ghostscript gsfonts imagemagick-6-common
|
|
krb5-locales libapparmor1 libavahi-client3 libavahi-common-data
|
|
libavahi-common3 libbsd0 libchafa0 libcups2 libdbus-1-3 libexpat1
|
|
libfftw3-double3 libfontconfig1 libfreetype6 libglib2.0-0 libglib2.0-data
|
|
libgomp1 libgs9 libgs9-common libgssapi-krb5-2 libicu66 libidn11 libijs-0.35
|
|
libjbig0 libjbig2dec0 libjpeg-turbo8 libjpeg8 libk5crypto3 libkeyutils1
|
|
libkrb5-3 libkrb5support0 liblcms2-2 liblqr-1-0 libltdl7
|
|
libmagickcore-6.q16-6 libmagickwand-6.q16-6 libopenjp2-7 libpaper-utils
|
|
libpaper1 libpng16-16 libssl1.1 libtiff5 libwebp6 libwebpmux3 libx11-6
|
|
libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxml2 poppler-data
|
|
shared-mime-info tzdata ucf xdg-user-dirs
|
|
Suggested packages:
|
|
default-dbus-session-bus | dbus-session-bus fonts-noto fonts-freefont-otf
|
|
| fonts-freefont-ttf fonts-texgyre ghostscript-x cups-common libfftw3-bin
|
|
libfftw3-dev krb5-doc krb5-user liblcms2-utils libmagickcore-6.q16-6-extra
|
|
poppler-utils fonts-japanese-mincho | fonts-ipafont-mincho
|
|
fonts-japanese-gothic | fonts-ipafont-gothic fonts-arphic-ukai
|
|
fonts-arphic-uming fonts-nanum
|
|
The following NEW packages will be installed:
|
|
chafa dbus fontconfig-config fonts-dejavu-core fonts-droid-fallback
|
|
fonts-noto-mono fonts-urw-base35 ghostscript gsfonts imagemagick-6-common
|
|
krb5-locales libapparmor1 libavahi-client3 libavahi-common-data
|
|
libavahi-common3 libbsd0 libchafa0 libcups2 libdbus-1-3 libexpat1
|
|
libfftw3-double3 libfontconfig1 libfreetype6 libglib2.0-0 libglib2.0-data
|
|
libgomp1 libgs9 libgs9-common libgssapi-krb5-2 libicu66 libidn11 libijs-0.35
|
|
libjbig0 libjbig2dec0 libjpeg-turbo8 libjpeg8 libk5crypto3 libkeyutils1
|
|
libkrb5-3 libkrb5support0 liblcms2-2 liblqr-1-0 libltdl7
|
|
libmagickcore-6.q16-6 libmagickwand-6.q16-6 libopenjp2-7 libpaper-utils
|
|
libpaper1 libpng16-16 libssl1.1 libtiff5 libwebp6 libwebpmux3 libx11-6
|
|
libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxml2 neofetch poppler-data
|
|
shared-mime-info tzdata ucf xdg-user-dirs
|
|
0 upgraded, 66 newly installed, 0 to remove and 6 not upgraded.
|
|
Need to get 36.2 MB of archives.
|
|
After this operation, 136 MB of additional disk space will be used.
|
|
Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 fonts-droid-fallback all 1:6.0.1r16-1.1 [1805 kB]
|
|
...
|
|
Get:66 http://archive.ubuntu.com/ubuntu focal/universe amd64 neofetch all 7.0.0-1 [77.5 kB]
|
|
Fetched 36.2 MB in 2s (22.1 MB/s)
|
|
...
|
|
Setting up ghostscript (9.50~dfsg-5ubuntu4.2) ...
|
|
Processing triggers for libc-bin (2.31-0ubuntu9.1) ...
|
|
STEP 3: COMMIT neofetch-ubuntu:20.04
|
|
--> 6486fa42efe
|
|
6486fa42efe5df4f761f4062d4986b7ec60b14d9d99d92d2aff2c26da61d13af
|
|
#+end_example
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title"><b>Note</b></p>
|
|
#+END_EXPORT
|
|
You might need =sudo= to run this command.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
As you can see, we just successfully built the container image. We also got a =hash= as a name for it.
|
|
|
|
If you were paying attention, you'll notice I used =&&= to chain commands instead of using multiple =RUN= instructions. You *can* use as many =RUN= commands as you like.
|
|
But be careful, each one of those commands creates a *layer*. The /more/ layers you create, the /more/ time they require to *download*/*upload*.
|
|
It might not seem like a lot of time to download a few extra layers on one system. But when we talk about /container orchestration/ platforms, it makes a big difference there.
|
|
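If you are curious to see those layers, =podman history= lists them for any image, including the one we just built. Each line in the output is a layer and the instruction that created it.

#+BEGIN_SRC bash
podman history neofetch-ubuntu:20.04
#+END_SRC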
|
|
Let's examine the build a bit more and see what we got.
|
|
|
|
#+BEGIN_EXAMPLE
|
|
STEP 1: FROM ubuntu:20.04
|
|
STEP 2: RUN apt-get update && apt-get install -y neofetch
|
|
#+END_EXAMPLE
|
|
|
|
The first step was to /download/ the base image so we could use it, then we added a *layer* which installed =neofetch=. Let's list our *images*.
|
|
|
|
#+BEGIN_SRC bash :dir /sudo:: :results output
|
|
podman images
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
REPOSITORY TAG IMAGE ID CREATED SIZE
|
|
localhost/neofetch-ubuntu 20.04 6486fa42efe5 5 minutes ago 241 MB
|
|
docker.io/library/ubuntu 20.04 f63181f19b2f 5 weeks ago 75.3 MB
|
|
#+end_example
|
|
|
|
We can see that we have =localhost/neofetch-ubuntu=. If we examine the =ID=, we can see that it is the same as the one given to us at the end of the build.
|
|
|
|
****** Running our container
|
|
|
|
Now that we created a /brand-spanking-new/ image, we can run it.
|
|
|
|
#+BEGIN_SRC bash :dir /sudo:: :results output
|
|
podman images
|
|
#+END_SRC
|
|
|
|
#+begin_example
|
|
REPOSITORY TAG IMAGE ID CREATED SIZE
|
|
localhost/neofetch-ubuntu 20.04 6486fa42efe5 6 minutes ago 241 MB
|
|
docker.io/library/ubuntu 20.04 f63181f19b2f 5 weeks ago 75.3 MB
|
|
#+end_example
|
|
|
|
First we list our *images*. Then we choose which one to run.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman run -it neofetch-ubuntu:20.04 neofetch
|
|
#+END_SRC
|
|
|
|
|
|
#+caption: Neofetch on Ubuntu
|
|
#+attr_html: :target _blank
|
|
[[file:images/linux-containers/container-neofetch-ubuntu.png][file:images/linux-containers/container-neofetch-ubuntu.png]]
|
|
=neofetch= is installed in that container, because the *image* has it.
|
|
|
|
We can also build an image based on something else, maybe ~Fedora~ ?
|
|
|
|
I looked in [[https://hub.docker.com/_/fedora/][Dockerhub (Fedora)]] and found the following image.
|
|
|
|
#+BEGIN_SRC dockerfile :tangle /tmp/docker/Dockerfile.fedora
|
|
FROM fedora:32
|
|
|
|
RUN dnf install -y neofetch
|
|
#+END_SRC
|
|
|
|
We can quickly duplicate what we did before: save the file, then run the command to build the image.
|
|
|
|
#+BEGIN_SRC bash :dir /sudo::/tmp/docker/ :results output
|
|
podman build -t neofetch-fedora:20.04 -f Dockerfile.fedora .
|
|
#+END_SRC
|
|
|
|
#+RESULTS:
|
|
#+begin_example
|
|
STEP 1: FROM fedora:32
|
|
STEP 2: RUN dnf install -y neofetch
|
|
Fedora 32 openh264 (From Cisco) - x86_64 2.2 kB/s | 2.5 kB 00:01
|
|
Fedora Modular 32 - x86_64 4.1 MB/s | 4.9 MB 00:01
|
|
Fedora Modular 32 - x86_64 - Updates 4.9 MB/s | 4.4 MB 00:00
|
|
Fedora 32 - x86_64 - Updates 9.0 MB/s | 29 MB 00:03
|
|
Fedora 32 - x86_64 9.8 MB/s | 70 MB 00:07
|
|
Dependencies resolved.
|
|
========================================================================================
|
|
Package Arch Version Repo Size
|
|
========================================================================================
|
|
Installing:
|
|
neofetch noarch 7.1.0-3.fc32 updates 90 k
|
|
Installing dependencies:
|
|
ImageMagick-libs x86_64 1:6.9.11.27-1.fc32 updates 2.3 M
|
|
LibRaw x86_64 0.19.5-4.fc32 updates 320 k
|
|
...
|
|
xorg-x11-utils x86_64 7.5-34.fc32 fedora 108 k
|
|
|
|
Transaction Summary
|
|
========================================================================================
|
|
Install 183 Packages
|
|
|
|
Total download size: 62 M
|
|
Installed size: 203 M
|
|
Downloading Packages:
|
|
(1/183): LibRaw-0.19.5-4.fc32.x86_64.rpm 480 kB/s | 320 kB 00:00
|
|
...
|
|
xorg-x11-utils-7.5-34.fc32.x86_64
|
|
|
|
Complete!
|
|
STEP 3: COMMIT neofetch-fedora:20.04
|
|
--> a5e57f6d5f1
|
|
a5e57f6d5f13075a105e02000e00589bab50d913900ee60399cd5a092ceca5a3
|
|
#+end_example
|
|
|
|
Then, run the container.
|
|
|
|
#+BEGIN_SRC bash
|
|
podman run -it neofetch-fedora:20.04 neofetch
|
|
#+END_SRC
|
|
|
|
#+caption: Neofetch on Fedora
|
|
#+attr_html: :target _blank
|
|
[[file:images/linux-containers/container-neofetch-fedora.png][file:images/linux-containers/container-neofetch-fedora.png]]
|
|
|
|
**** Conclusion
|
|
|
|
One final thought /before/ I let you go. You may have noticed that I used =Podman= instead of =Docker=. In these examples, both commands should be interchangeable.
|
|
Remember kids, /containers/ are cool ! They can be used for a wide variety of things. They are great at many things and, with the help of /container orchestration/ platforms, they can scale better than ever. They are also very bad at certain things. Be careful where, how and when to use them. Stay safe and, mainly, have fun !
|
|
*** DONE Playing with containers and Tor :docker:linux:@text_editors:ubuntu:fedora:proxy:privoxy:
|
|
:PROPERTIES:
|
|
:EXPORT_HUGO_LASTMOD: 2021-06-21
|
|
:EXPORT_DATE: 2021-06-21
|
|
:EXPORT_FILE_NAME: playing-with-containers-and-tor
|
|
:CUSTOM_ID: playing-with-containers-and-tor
|
|
:END:
|
|
|
|
As my followers well know, by now, I am a tinkerer at heart. Why do I do things ? No one knows ! I don't even know.
|
|
|
|
All I know, all I can tell you, is that I like to see what I can do with the tools I have at hand, how I can bend them to my will.
|
|
Why, you may ask ? The answer is a bit complicated; part of who I am, part of what I do as a DevOps engineer. Bottom line is, this time I was curious.
|
|
|
|
I went down a road that taught me so much more about /containers/, /docker/, /docker-compose/ and even /Linux/ itself.
|
|
|
|
The question I had was simple, *can I run a container only through Tor running in another container?*
|
|
#+hugo: more
|
|
|
|
**** Tor
|
|
|
|
I usually like to start topics that I haven't mentioned before with definitions. In this case, what is [[https://2019.www.torproject.org/index.html.en][Tor]], you may ask ?
|
|
|
|
#+begin_quote
|
|
Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.
|
|
#+end_quote
|
|
|
|
That /home/ page is a bit obscure now, since it was replaced by the new /design/ of the website.
Don't get me wrong, I love what *Tor* has done with all the services they offer.
But giving so much importance to the browser only, and leaving the rest of the website for dead, I have to say, makes me a bit sad.
|
|
|
|
Anyway, let's share the love for *Tor* and thank them for the beautiful project they offered humanity.
|
|
|
|
Now that we thanked them, let's abuse it.
|
|
|
|
***** Tor in a container
|
|
|
|
The task I set to discover relied on *Tor* being containerized.
|
|
The first thing I do is, simply, not re-invent the wheel.
|
|
Let's find out if someone already took that task.
|
|
|
|
With a little bit of searching, I found the [[https://hub.docker.com/r/dperson/torproxy][dperson/torproxy]] docker image.
|
|
It isn't ideal but I /believe/ it is written to be rebuilt.
|
|
|
|
Can we run it ?
|
|
|
|
#+begin_src bash
|
|
docker run -it -p 127.0.0.1:8118:8118 -d dperson/torproxy
|
|
#+end_src
|
|
|
|
#+begin_src bash
|
|
curl -Lx http://localhost:8118 http://jsonip.com/
|
|
#+end_src
|
|
|
|
The IP you get back is *definitely* not your IP. Don't take /my word/ for it !
|
|
Go to [[http://jsonip.com/][http://jsonip.com/]] in a browser and see for yourself.
|
|
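If you would like a second opinion, the *Tor* project exposes a small check endpoint; assuming it is still up at this address, it will tell you whether your request went through the *Tor* network.

#+begin_src bash
# Returns {"IsTor":true,...} when the request goes through Tor.
curl -Lx http://localhost:8118 https://check.torproject.org/api/ip
#+end_src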
|
|
Now that we *know* we can run *Tor* in a container effectively, let's kick it up a /notch/.
|
|
|
|
**** docker-compose
|
|
|
|
I will be /testing/ and making changes as I go along. For this reason, it's a good idea to use [[https://docs.docker.com/compose/][docker-compose]] to do this.
|
|
|
|
#+begin_quote
|
|
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
|
|
#+end_quote
|
|
/Now/ that we saw what the *docker* team has to say about *docker-compose*, let's go ahead and use it.
|
|
|
|
First, let's implement what we just ran /ad-hoc/ in *docker-compose*.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
version: '3.9'
|
|
services:
|
|
torproxy:
|
|
image: dperson/torproxy
|
|
container_name: torproxy
|
|
restart: unless-stopped
|
|
#+end_src
|
|
|
|
**** Air-gapped container
|
|
|
|
The next piece of the puzzle is to figure out *if* and *how* we can create an /air-gapped container/.
|
|
|
|
It turns out, we can create an =internal= network in /docker/ that has no access to the internet.
|
|
|
|
First, the /air-gapped container/.
|
|
|
|
#+begin_src yaml
|
|
air-gapped:
|
|
image: ubuntu
|
|
container_name: air-gapped
|
|
restart: unless-stopped
|
|
command:
|
|
- bash
|
|
- -c
|
|
- sleep infinity
|
|
networks:
|
|
- no-internet
|
|
#+end_src
|
|
|
|
Then comes the network.
|
|
|
|
#+begin_src yaml
|
|
networks:
|
|
no-internet:
|
|
driver: bridge
|
|
internal: true
|
|
#+end_src
|
|
|
|
|
|
Let's put it all together in a =docker-compose.yaml= file and run it.
|
|
|
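Assembled, the file looks like this.

#+begin_src yaml
---
version: '3.9'

services:
  torproxy:
    image: dperson/torproxy
    container_name: torproxy
    restart: unless-stopped

  air-gapped:
    image: ubuntu
    container_name: air-gapped
    restart: unless-stopped
    command:
      - bash
      - -c
      - sleep infinity
    networks:
      - no-internet

networks:
  no-internet:
    driver: bridge
    internal: true
#+end_src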
|
#+begin_src bash
|
|
docker-compose up -d
|
|
#+end_src
|
|
|
|
Keep that terminal open, and let's put the /hypothesis/ to the test and see if it rises up to be a /theory/.
|
|
|
|
#+begin_src bash :results output
|
|
docker exec air-gapped apt-get update
|
|
#+end_src
|
|
|
|
Aaaaand...
|
|
|
|
#+begin_src text
|
|
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
|
|
Temporary failure resolving 'archive.ubuntu.com'
|
|
Err:2 http://security.ubuntu.com/ubuntu focal-security InRelease
|
|
Temporary failure resolving 'security.ubuntu.com'
|
|
Err:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
|
|
Temporary failure resolving 'archive.ubuntu.com'
|
|
Err:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
|
|
Temporary failure resolving 'archive.ubuntu.com'
|
|
Reading package lists...
|
|
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal/InRelease Temporary failure resolving 'archive.ubuntu.com'
|
|
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease Temporary failure resolving 'archive.ubuntu.com'
|
|
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease Temporary failure resolving 'archive.ubuntu.com'
|
|
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease Temporary failure resolving 'security.ubuntu.com'
|
|
W: Some index files failed to download. They have been ignored, or old ones used instead.
|
|
#+end_src
|
|
|
|
Looks like it's real, peeps. *Hooray* !
|
|
|
|
**** Putting everything together
|
|
|
|
Okay, now let's put everything together. The list of changes we need to make is minimal.
|
|
First, I will list them, then I will simply write them out in *docker-compose*.
|
|
|
|
- Create an =internet= network for the *Tor* container
|
|
- Attach the =internet= network to the *Tor* container
|
|
- Attach the =no-internet= network to the *Tor* container so that our /air-gapped/ container can access it.
|
|
|
|
Let's get to work.
|
|
|
|
#+begin_src yaml :tangle docker-compose.yaml
|
|
---
|
|
version: '3.9'
|
|
services:
|
|
|
|
torproxy:
|
|
image: dperson/torproxy
|
|
container_name: torproxy
|
|
restart: unless-stopped
|
|
networks:
|
|
- no-internet
|
|
- internet
|
|
|
|
air-gapped:
|
|
image: ubuntu
|
|
container_name: air-gapped
|
|
restart: unless-stopped
|
|
command:
|
|
- bash
|
|
- -c
|
|
- sleep infinity
|
|
networks:
|
|
- no-internet
|
|
|
|
networks:
|
|
no-internet:
|
|
driver: bridge
|
|
internal: true
|
|
internet:
|
|
driver: bridge
|
|
internal: false
|
|
#+end_src
|
|
|
|
Run everything.
|
|
|
|
#+begin_src bash :results output
|
|
docker-compose up -d
|
|
#+end_src
|
|
|
|
Yes, this will run it in the background and there is *no* need for you to open another terminal.
|
|
It's always /good/ to know *both* ways. Anyway, let's test.
|
|
|
|
Let's =exec= into the container.
|
|
|
|
#+begin_src bash
|
|
docker exec -it air-gapped bash
|
|
#+end_src
|
|
|
|
Then we configure =apt= to use our =torproxy= service.
|
|
|
|
#+begin_src bash :dir /docker:air-gapped:/
|
|
echo 'Acquire::http::Proxy "http://torproxy:8118/";' > /etc/apt/apt.conf.d/proxy
|
|
echo "export HTTP_PROXY=http://torproxy:8118/" >> ~/.bashrc
|
|
echo "export HTTPS_PROXY=http://torproxy:8118/" >> ~/.bashrc
|
|
export HTTP_PROXY=http://torproxy:8118/
|
|
export HTTPS_PROXY=http://torproxy:8118/
|
|
apt-get update
|
|
apt-get upgrade -y
|
|
DEBIAN_FRONTEND=noninteractive apt-get install -y curl
|
|
#+end_src
|
|
|
|
**** Harvesting the fruits of our labour
|
|
|
|
First, we *always* check if everything is set correctly.
|
|
|
|
While inside the container, we check the /environment variables/.
|
|
|
|
#+begin_src bash :dir /docker:air-gapped:/
|
|
env | grep HTTP
|
|
#+end_src
|
|
|
|
You should see.
|
|
|
|
#+begin_example
|
|
HTTPS_PROXY=http://torproxy:8118/
|
|
HTTP_PROXY=http://torproxy:8118/
|
|
#+end_example
|
|
|
|
Then, we curl our *IP*.
|
|
|
|
#+begin_src bash :dir /docker:air-gapped:/
|
|
curl https://jsonip.com/
|
|
#+end_src
|
|
|
|
And that is also not your *IP*.
|
|
|
|
It works !
|
|
|
|
**** Conclusion
|
|
|
|
Is it possible to route a container through another *Tor* container ?
|
|
|
|
The answer is /obviously/ *Yes* and this is the way to do it. Enjoy.
|
|
|
|
*** DONE Let's play with Traefik :docker:linux:traefik:nginx:ssl:letsencrypt:
|
|
:PROPERTIES:
|
|
:EXPORT_HUGO_LASTMOD: 2021-06-24
|
|
:EXPORT_DATE: 2021-06-24
|
|
:EXPORT_FILE_NAME: let-s-play-with-traefik
|
|
:CUSTOM_ID: let-s-play-with-traefik
|
|
:END:
|
|
|
|
I've been playing around with containers for a few years now. I find them very useful.
|
|
If you host your own, like I do, you probably write a lot of /nginx/ configurations, maybe /apache/.
|
|
|
|
If that's the case, then you have your own solution to get certificates.
|
|
I'm also assuming that you are using /let's encrypt/ with /certbot/ or something.
|
|
|
|
Well, I didn't want to anymore. It was time to consolidate. Here comes Traefik.
|
|
#+hugo: more
|
|
|
|
**** Traefik
|
|
|
|
So [[https://doc.traefik.io/traefik/][Traefik]] is
|
|
|
|
#+begin_quote
|
|
an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
|
|
#+end_quote
|
|
|
|
Which made me realize, I still need /nginx/ somewhere. We'll see when we get to it. Let's focus on /Traefik/.
|
|
|
|
***** Configuration
|
|
|
|
If you run a lot of containers and manage them, then you probably use /docker-compose/.
|
|
|
|
I'm still using =version 2.3=, I know I am due to an upgrade but I'm working on it slowly.
|
|
It's a bigger project... One step at a time.
|
|
|
|
Let's start from the top, literally.
|
|
|
|
#+NAME: docker-compose-header
|
|
#+begin_src yaml
|
|
---
|
|
version: '2.3'
|
|
|
|
services:
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title"><b>Note</b></p>
|
|
#+END_EXPORT
|
|
Upgrading to =version 3.x= of /docker-compose/ requires the creation of /networks/ to /link/ containers together. It's worth investing into, but this is not a /docker-compose/ tutorial.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
Then comes the service.
|
|
|
|
#+NAME: docker-compose-service-traefik
|
|
#+begin_src yaml
|
|
traefik:
|
|
container_name: traefik
|
|
image: "traefik:latest"
|
|
restart: unless-stopped
|
|
mem_limit: 40m
|
|
mem_reservation: 25m
|
|
#+end_src
|
|
|
|
and of course, who can forget the volume mounting.
|
|
|
|
#+NAME: docker-compose-traefik-volumes
|
|
#+begin_src yaml
|
|
volumes:
|
|
- "/var/run/docker.sock:/var/run/docker.sock:ro"
|
|
#+end_src
|
|
|
|
***** Design
|
|
|
|
Now let's talk design to see how we're going to configure this bad boy.
|
|
|
|
I want /Traefik/ to listen on ports =80= and =443=, at a minimum, to serve traffic.
|
|
Let's do that.
|
|
|
|
#+NAME: docker-compose-traefik-config-listeners
|
|
#+begin_src yaml
|
|
command:
|
|
- --entrypoints.web.address=:80
|
|
- --entrypoints.websecure.address=:443
|
|
#+end_src
|
|
|
|
and let's not forget to map them.
|
|
|
|
#+NAME: docker-compose-traefik-port-mapping
|
|
#+begin_src yaml
|
|
ports:
|
|
- "80:80"
|
|
- "443:443"
|
|
#+end_src
|
|
|
|
Next, we would like to redirect =http= to =https= always.
|
|
|
|
#+NAME: docker-compose-traefik-config-https-redirect
|
|
#+begin_src yaml
|
|
- --entrypoints.web.http.redirections.entryPoint.to=websecure
|
|
- --entrypoints.web.http.redirections.entryPoint.scheme=https
|
|
#+end_src
|
|
|
|
We are using docker, so let's configure that as the provider.
|
|
|
|
#+NAME: docker-compose-traefik-config-provider
|
|
#+begin_src yaml
|
|
- --providers.docker
|
|
#+end_src
|
|
|
|
We can set the log level.
|
|
|
|
#+NAME: docker-compose-traefik-config-log-level
|
|
#+begin_src yaml
|
|
- --log.level=INFO
|
|
#+end_src
|
|
|
|
If you want a /dashboard/, you have to enable it.
|
|
|
|
#+NAME: docker-compose-traefik-config-dashboard
|
|
#+begin_src yaml
|
|
- --api.dashboard=true
|
|
#+end_src
|
|
|
|
And finally, if you're using Prometheus to scrape metrics... You have to enable that too.
|
|
|
|
#+NAME: docker-compose-traefik-config-prometheus
|
|
#+begin_src yaml
|
|
- --metrics.prometheus=true
|
|
#+end_src
|
|
|
|
***** Let's Encrypt
|
|
|
|
Let's talk *TLS*. You want to serve encrypted traffic to users. You will need an /SSL Certificate/.
|
|
|
|
Your best bet is /open source/. Who are we kidding, you'd want to go with /let's encrypt/.
|
|
|
|
Let's configure /acme/ to do just that. Get us certificates. In this example, we are going to be using /Cloudflare/.
|
|
|
|
#+NAME: docker-compose-traefik-config-acme
|
|
#+begin_src yaml
|
|
- --certificatesresolvers.cloudflareresolver.acme.email=<your@email.here>
|
|
- --certificatesresolvers.cloudflareresolver.acme.dnschallenge.provider=cloudflare
|
|
- --certificatesresolvers.cloudflareresolver.acme.storage=./acme.json
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition warning">
|
|
<p class="admonition-title"><b>Warning</b></p>
|
|
#+END_EXPORT
|
|
/Let's Encrypt/ sets limits on *how many* certificates you can request in a certain amount of time. To test your certificate request and renewal processes, use their staging infrastructure. It is made for that purpose.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
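If you would like to point /acme/ at that staging environment, /Traefik/ exposes a ~caServer~ option; a sketch using /Let's Encrypt/'s staging directory would look like this.

#+begin_src yaml
- --certificatesresolvers.cloudflareresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
#+end_src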
|
|
Then we mount it, for persistence.
|
|
|
|
#+NAME: docker-compose-traefik-volumes-acme
|
|
#+begin_src yaml
|
|
- "./traefik/acme.json:/acme.json"
|
|
#+end_src
|
|
|
|
Let's not forget to add our /Cloudflare/ *API* credentials as environment variables for /Traefik/ to use.
|
|
|
|
#+NAME: docker-compose-traefik-environment
|
|
#+begin_src yaml
|
|
environment:
|
|
- CLOUDFLARE_EMAIL=<your-cloudflare@email.here>
|
|
- CLOUDFLARE_API_KEY=<your-api-key-goes-here>
|
|
#+end_src
|
|
|
|
***** Dashboard
|
|
|
|
Now let's configure /Traefik/ a bit more with a bit of labeling.
|
|
|
|
First, we specify the /host/ /Traefik/ should listen for to serve the /dashboard/.
|
|
|
|
#+NAME: docker-compose-traefik-labels
|
|
#+begin_src yaml
|
|
labels:
|
|
- "traefik.http.routers.dashboard-api.rule=Host(`dashboard.your-host.here`)"
|
|
- "traefik.http.routers.dashboard-api.service=api@internal"
|
|
#+end_src
|
|
|
|
With a little bit of /Traefik/ documentation searching and a lot of help from =htpasswd=, we can create a =basicauth= login to protect the dashboard from public use.
|
|
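As a quick sketch, assuming =htpasswd= is available (the ~apache2-utils~ package on /debian-based/ systems), we can generate such a user entry. The ~$~ signs are doubled so /docker-compose/ does not treat them as variables.

#+begin_src bash
htpasswd -nb user 'your-password-here' | sed -e 's/\$/\$\$/g'
#+end_src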
|
|
#+NAME: docker-compose-traefik-labels-basicauth
|
|
#+begin_src yaml
|
|
- "traefik.http.routers.dashboard-api.middlewares=dashboard-auth-user"
|
|
- "traefik.http.middlewares.dashboard-auth-user.basicauth.users=<user>:$$pws5$$rWsEfeUw9$$uV45uwsGeaPbu8RSexB9/"
|
|
- "traefik.http.routers.dashboard-api.tls.certresolver=cloudflareresolver"
|
|
#+end_src
|
|
|
|
***** Middleware
|
|
|
|
I'm not going to go into details about the /middleware/ flags configured here but you're welcome to check the /Traefik/ middleware [[https://doc.traefik.io/traefik/middlewares/overview/][docs]].
|
|
|
|
#+NAME: docker-compose-traefik-config-middleware
|
|
#+begin_src yaml
|
|
- "traefik.http.middlewares.frame-deny.headers.framedeny=true"
|
|
- "traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true"
|
|
- "traefik.http.middlewares.ssl-redirect.headers.sslredirect=true"
|
|
#+end_src
|
|
|
|
***** Full Configuration
|
|
|
|
Let's put everything together now.
|
|
|
|
#+NAME: docker-compose-traefik
|
|
#+begin_src yaml :noweb yes
|
|
<<docker-compose-service-traefik>>
|
|
<<docker-compose-traefik-port-mapping>>
|
|
<<docker-compose-traefik-config-listeners>>
|
|
<<docker-compose-traefik-config-https-redirect>>
|
|
<<docker-compose-traefik-config-provider>>
|
|
<<docker-compose-traefik-config-log-level>>
|
|
<<docker-compose-traefik-config-dashboard>>
|
|
<<docker-compose-traefik-config-prometheus>>
|
|
<<docker-compose-traefik-config-acme>>
|
|
<<docker-compose-traefik-volumes>>
|
|
<<docker-compose-traefik-volumes-acme>>
|
|
<<docker-compose-traefik-environment>>
|
|
<<docker-compose-traefik-labels>>
|
|
<<docker-compose-traefik-labels-basicauth>>
|
|
<<docker-compose-traefik-config-middleware>>
|
|
#+end_src
|
|
|
|
**** nginx
|
|
|
|
[[https://nginx.org/en/][nginx]] pronounced
|
|
|
|
#+begin_quote
|
|
[engine x] is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev.
|
|
#+end_quote
|
|
|
|
In this example, we're going to assume you have a /static blog/ generated by a /static blog generator/ of your choice and you would like to serve it for people to read it.
|
|
|
|
So let's do this quickly as there isn't much to tell except when it comes to labels.
|
|
|
|
#+NAME: docker-compose-service-nginx
|
|
#+begin_src yaml
|
|
nginx:
|
|
container_name: nginx
|
|
image: nginxinc/nginx-unprivileged:alpine
|
|
restart: unless-stopped
|
|
mem_limit: 8m
|
|
command: ["nginx", "-enable-prometheus-metrics", "-g", "daemon off;"]
|
|
volumes:
|
|
- "./blog/:/usr/share/nginx/html/blog:ro"
|
|
- "./nginx/default.conf.template:/etc/nginx/templates/default.conf.template:ro"
|
|
environment:
|
|
- NGINX_BLOG_PORT=80
|
|
- NGINX_BLOG_HOST=<blog.your-host.here>
|
|
#+end_src
|
|
|
|
We are mounting the blog directory from our /host/ to =/usr/share/nginx/html/blog= as *read-only* into the /nginx/ container. We are also providing /nginx/ with a template configuration and passing the variables as /environment/ variables as you noticed. It is also mounted as *read-only*. The configuration template looks like the following, if you're wondering.
|
|
|
|
#+begin_src nginx
|
|
server {
|
|
|
|
listen ${NGINX_BLOG_PORT};
|
|
server_name localhost;
|
|
|
|
root /usr/share/nginx/html/${NGINX_BLOG_HOST};
|
|
|
|
location / {
|
|
index index.html;
|
|
try_files $uri $uri/ =404;
|
|
}
|
|
}
|
|
#+end_src
|
|
|
|
***** Traefik configuration
|
|
|
|
So, the /Traefik/ configuration at this point is a little bit tricky the first time around.
|
|
|
|
First, we configure the /host/ like we did before.
|
|
|
|
#+NAME: docker-compose-nginx-labels
|
|
#+begin_src yaml
|
|
labels:
|
|
- "traefik.http.routers.blog-http.rule=Host(`blog.your-host.here`)"
|
|
#+end_src
|
|
|
|
We tell /Traefik/ about our service and the /port/ to loadbalance on.
|
|
|
|
#+NAME: docker-compose-nginx-labels-service
|
|
#+begin_src yaml
|
|
- "traefik.http.routers.blog-http.service=blog-http"
|
|
- "traefik.http.services.blog-http.loadbalancer.server.port=80"
|
|
#+end_src
|
|
|
|
We configure the /middleware/ to use configuration defined in the /Traefik/ middleware configuration section.
|
|
|
|
#+NAME: docker-compose-nginx-labels-middleware
|
|
#+begin_src yaml
|
|
- "traefik.http.routers.blog-http.middlewares=blog-main"
|
|
- "traefik.http.middlewares.blog-main.chain.middlewares=frame-deny,browser-xss-filter,ssl-redirect"
|
|
#+end_src
|
|
|
|
Finally, we tell it about our resolver to generate an /SSL Certificate/.
|
|
|
|
#+NAME: docker-compose-nginx-labels-tls
|
|
#+begin_src yaml
|
|
- "traefik.http.routers.blog-http.tls.certresolver=cloudflareresolver"
|
|
#+end_src
|
|
|
|
***** Full Configuration
|
|
|
|
Let's put the /nginx/ service together.
|
|
|
|
#+NAME: docker-compose-nginx
|
|
#+begin_src yaml :noweb yes
|
|
<<docker-compose-service-nginx>>
|
|
<<docker-compose-nginx-labels>>
|
|
<<docker-compose-nginx-labels-service>>
|
|
<<docker-compose-nginx-labels-middleware>>
|
|
<<docker-compose-nginx-labels-tls>>
|
|
#+end_src
|
|
|
|
**** Finale
|
|
|
|
It's finally time to put everything together !
|
|
|
|
#+begin_src yaml :noweb yes
|
|
<<docker-compose-header>>
|
|
|
|
<<docker-compose-traefik>>
|
|
|
|
<<docker-compose-nginx>>
|
|
#+end_src
|
|
|
|
Now we're all set to save it in a =docker-compose.yaml= file and
|
|
|
|
#+begin_src bash
|
|
docker-compose up -d
|
|
#+end_src
|
|
|
|
If everything is configured correctly, your blog should pop up momentarily.
|
|
*Enjoy !*
|
|
|
|
*** DONE Time to deploy our static blog :docker:dockerfile:linux:traefik:nginx:ssl:letsencrypt:
|
|
:PROPERTIES:
|
|
:EXPORT_HUGO_LASTMOD: 2021-07-10
|
|
:EXPORT_DATE: 2021-07-10
|
|
:EXPORT_FILE_NAME: time-to-deploy-our-static-blog
|
|
:CUSTOM_ID: time-to-deploy-our-static-blog
|
|
:END:
|
|
|
|
In the previous post, entitled "[[#let-s-play-with-traefik]]", we deployed
|
|
/Traefik/ and configured it. We left it in a running state but we haven't
|
|
/really/ used it properly yet.
|
|
|
|
Let's put it to some good use this time around.
|
|
#+hugo: more
|
|
|
|
**** Pre-requisites
|
|
|
|
This blog post assumes that you already have a generated static /website/ or
|
|
/blog/. There are multiple tools in the sphere which allow you to statically
|
|
generate your blog.
|
|
|
|
You can find a list of them on the
|
|
[[https://github.com/myles/awesome-static-generators][Awesome Static Web Site
|
|
Generators]].
|
|
|
|
Once we have the directory on disk, we can move forward.
|
|
|
|
**** Components
|
|
|
|
Let's talk components a tiny bit and see what we have and what we need. We
|
|
already have a /static site/. We can expose our /site/ using /Traefik/. We can also
|
|
generate an /SSL certificate/ for the exposed /site/.
|
|
|
|
What we don't have, is a way to /serve/ our /static site/. /Traefik/ is only a
|
|
/reverse proxy/ server. A /reverse proxy/, sort of, routes into and out of
|
|
sockets. These sockets could be open local ports, or they could, also, be other
|
|
containers.
|
|
|
|
**** Nginx
|
|
|
|
That's where [[https://nginx.org/][/nginx/]] comes into the picture.
|
|
|
|
#+begin_quote
|
|
nginx [engine x] is an HTTP and reverse proxy server, a mail proxy server, and a
|
|
generic TCP/UDP proxy server, originally written by Igor Sysoev.
|
|
#+end_quote
|
|
|
|
We can find an /nginx/ docker image on
|
|
[[https://hub.docker.com/_/nginx][dockerhub]]. But, if we look around carefully
|
|
we can see a section that mentions "/running nginx as a non-root user/". This
|
|
led me to a small discovery which made me look for an alternative to that image.
|
|
|
|
Luckily for us, /nginxinc/ also releases an /unprivileged/ version of that image
|
|
under the name of [[https://hub.docker.com/r/nginxinc/nginx-unprivileged][nginx-unprivileged]].
|
|
|
|
***** Configuration
|
|
|
|
The /nginx/ docker image can be configured using a /template/ configuration file
|
|
which can be mounted into the container.
|
|
|
|
The configuration can include /variables/ which will be replaced by /environment
|
|
variables/ we inject into the container.
|
|
|
|
Let's look at an example configuration =default.conf.template=.
|
|
|
|
#+begin_src conf
|
|
server {
|
|
|
|
listen ${NGINX_BLOG_PORT};
|
|
server_name localhost;
|
|
|
|
root /usr/share/nginx/html/${NGINX_BLOG_HOST};
|
|
|
|
location / {
|
|
index index.html;
|
|
|
|
try_files $uri $uri/ =404;
|
|
}
|
|
}
|
|
#+end_src
|
|
|
|
In the example above, we use ~NGINX_BLOG_HOST~ and ~NGINX_BLOG_PORT~ as
|
|
/environment variables/ to be replaced in the /nginx/ configuration.
|
|
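Under the hood, the official /nginx/ images run =envsubst= at startup on every ~*.template~ file found under =/etc/nginx/templates/= and write the result to =/etc/nginx/conf.d/=. We can simulate that step by hand; a rough sketch, with example values.

#+begin_src shell
# Only the whitelisted variables get substituted.
export NGINX_BLOG_PORT=80 NGINX_BLOG_HOST=blog.example.com
envsubst '${NGINX_BLOG_PORT} ${NGINX_BLOG_HOST}' \
    < default.conf.template > default.conf
#+end_src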
|
|
**** Container
|
|
|
|
After creating our /nginx/ configuration, we need to run an /nginx/ container
|
|
and serve our blog to the users.
|
|
|
|
In the [[#let-s-play-with-traefik][previous post]], we used /docker-compose/ to
|
|
deploy /Traefik/. We will continue with that and deploy our /nginx/ container
|
|
alongside.
|
|
|
|
***** docker-compose
|
|
|
|
Before we go ahead and create another service in the /docker-compose/ file,
|
|
let's talk a bit about what we need.
|
|
|
|
We need to deploy an /unprivileged nginx/ container, first and foremost. We need
|
|
to inject a few /environment variables/ into the container to be included in the
|
|
/nginx/ templated configuration. We, also, need not forget to include the
|
|
/labels/ required for /Traefik/ to route our container properly, and generate an
|
|
/SSL certificate/. Finally, we need to mount both the /nginx configuration
|
|
template/ and, of course, our /static blog/.
|
|
|
|
Now let's head to work.
|
|
|
|
#+begin_src yaml
|
|
nginx:
|
|
container_name: nginx
|
|
image: nginxinc/nginx-unprivileged:alpine
|
|
restart: unless-stopped
|
|
mem_limit: 8m
|
|
command: ["nginx", "daemon off;"]
|
|
volumes:
|
|
- "./blog/static/:/usr/share/nginx/html/blog:ro"
|
|
- "./blog/nginx/default.conf.template:/etc/nginx/templates/default.conf.template:ro"
|
|
environment:
|
|
- NGINX_BLOG_PORT=80
|
|
- NGINX_BLOG_HOST=blog.example.com
|
|
labels:
|
|
- "traefik.http.routers.blog-http.rule=Host(`blog.example.com`)"
|
|
- "traefik.http.routers.blog-http.service=blog-http"
|
|
- "traefik.http.services.blog-http.loadbalancer.server.port=80"
|
|
- "traefik.http.routers.blog-http.middlewares=blog-main"
|
|
- "traefik.http.middlewares.blog-main.chain.middlewares=frame-deny,browser-xss-filter,ssl-redirect"
|
|
- "traefik.http.middlewares.frame-deny.headers.framedeny=true"
|
|
- "traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true"
|
|
- "traefik.http.middlewares.ssl-redirect.headers.sslredirect=true"
|
|
- "traefik.http.routers.blog-http.tls.certresolver=cloudflareresolver"
|
|
#+end_src
|
|
|
|
If we look at the /Traefik/ configuration we can see the following important configurations.
|
|
|
|
- =traefik.http.routers.blog-http.rule= :: This configures the ~hostname~ /Traefik/ should be listening on for our /nginx/ container.
|
|
- =traefik.http.routers.blog-http.service= :: This configures the /router/ to
|
|
use our /service/.
|
|
- =traefik.http.services.blog-http.loadbalancer.server.port= :: We configure the /service/ ~port~.
|
|
- =traefik.http.routers.blog-http.middlewares= :: We configure the /router/ to
|
|
use our ~middleware~.
|
|
- =traefik.http.middlewares.blog-main.chain.middlewares= :: We configure all the ~middleware~ chain.
|
|
- =traefik.http.middlewares.ssl-redirect.headers.sslredirect= :: We always
|
|
redirect ~http~ to ~https~.
|
|
- =traefik.http.routers.blog-http.tls.certresolver= :: We configure the /resolver/ to use to generate our /SSL certificate/.
|
|
|
|
We can also see our /static blog/ and the /nginx template/ being mounted as /read-only/ inside the container to their right paths. Finally, we verify that
|
|
our ~NGINX_BLOG_HOST~ and ~NGINX_BLOG_PORT~ are configured correctly.
|
|
|
|
**** Final steps
|
|
|
|
After putting everything in place, we do a quick last check that everything is
|
|
correctly in place. Once we are satisfied with the results, we run !
|
|
|
|
#+begin_src shell
|
|
docker-compose up -d
|
|
#+end_src
|
|
|
|
And we're good to go.
|
|
|
|
If we point our ~/etc/hosts~ to our site, we can test that everything works.
|
|
|
|
#+begin_src conf
|
|
192.168.0.1 blog.example.com
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title"><b>Note</b></p>
|
|
#+END_EXPORT
|
|
Replace ~192.168.0.1~ with your public server's IP address. This is an example
|
|
of an IP unroutable on the internet.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
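We can also do a quick smoke test from the command line; ~-I~ fetches the headers only and ~-L~ follows the =http= to =https= redirect.

#+begin_src shell
curl -IL http://blog.example.com
#+end_src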
|
|
If everything is configured properly, we should see our /site/ pop up
|
|
momentarily. The /SSL certificate/ will fail for a few minutes until /Traefik/
|
|
is able to generate a new one and serve it. Give it some time.
|
|
|
|
Once everything is up and running, you can enjoy your /blog/ being served by
|
|
/Traefik/ through an /nginx/ container.
|
|
|
|
**** Conclusion
|
|
|
|
You can serve your /static blog/ with /Traefik/ and /nginx/ easily. Make sure to
|
|
take the necessary measures to run containers /safely/ and it should be easy as pie.
|
|
/Traefik/ makes it possible to route to multiple containers this way, allowing
|
|
us to add more services to the /docker-compose/ file. At the same time, /nginx/,
|
|
with the /templating feature/, offers us another flexible way to serve a big
|
|
variety of /static sites/. Using them in combination opens a wide range of
|
|
possibilities.
|
|
|
|
*** DONE Raspberry Pi, Container Orchestration and Swarm right at home :docker:linux:arm:ansible:swarm:raspberry_pi:
|
|
:PROPERTIES:
|
|
:EXPORT_HUGO_LASTMOD: 2022-08-25
|
|
:EXPORT_DATE: 2022-08-24
|
|
:EXPORT_FILE_NAME: raspberry-pi-container-orchestration-and-swarm-right-at-home
|
|
:CUSTOM_ID: raspberry-pi-container-orchestration-and-swarm-right-at-home
|
|
:END:
|
|
|
|
When I started looking into solutions for my home container orchestration, I
|
|
wanted a solution that runs on my 2 Raspberry Pis. These beasts have 4 virtual
|
|
CPUs and a whopping 1GB of memory each. In other words, not a lot of resources to
|
|
go around. What can I run on these? I wonder!
|
|
|
|
#+hugo: more
|
|
|
|
**** Consideration
|
|
If we look at the state of /container orchestration/ today, we see that
|
|
/Kubernetes/ dominates the space. /Kubernetes/ is awesome, but will it run on my
|
|
Pis ? I doubt it.
|
|
|
|
Fret not ! There are other, /more lightweight/, solutions out there. Let's
|
|
discuss them briefly.
|
|
|
|
***** K3s
|
|
I have experience with /K3s/. I even wrote a blog [[#building-k3s-on-a-pi][post]] on it. Unfortunately, I
|
|
found that /K3s/ uses almost half of the memory resources of the Pis to run.
|
|
That's too much overhead lost.
|
|
|
|
***** MicroK8s
|
|
/MicroK8s/ is a Canonical project. It has similarities to /K3s/ in the way of
|
|
easy deployment and lightweight focus. The end result is also extremely similar
|
|
to /K3s/ in resource usage.
|
|
|
|
***** Nomad
|
|
/Nomad/ is a /HashiCorp/ product and, just like all their other products, it is very
well designed, very robust and extremely versatile. Running it on the Pis was a
|
|
breeze, it barely used any resources.
|
|
|
|
It sounds great so far, doesn't it ? Well, sort of. The deployment and
configuration of /Nomad/ is a bit tricky and requires a few moving
components. Those can be automated with /Ansible/ eventually. Aside from that,
/Nomad/ requires extra configuration to install and enable CNI and service
|
|
discovery.
|
|
|
|
Finally, it has a steep learning curve to deploy containers in the cluster and
|
|
you have HCL to deal with.
|
|
|
|
***** Swarm
|
|
I was surprised to find that not only is /Docker Swarm/ still alive, it also
became a mode which has shipped with /docker/ for a few years now.
|
|
|
|
I also found out that /Swarm/ has great /Ansible/ integration, for both
|
|
initializing and creating the cluster and deploying /stacks/ and /services/ into
|
|
it. After all, if you are already familiar with /docker-compose/, you'll feel
|
|
right at home.
|
|
|
|
**** Setting up a Swarm cluster
|
|
I set out to deploy my /Swarm Cluster/ and manage it using /Ansible/. I didn't
|
|
want to do the work again in the future and I wanted to go the IaC
|
|
(/Infrastructure as Code/) route, as should you.
|
|
|
|
At this stage, I have to take a few assumptions. I assume that you already have
|
|
at least 2 machines with a Linux Distribution installed on them. I, also, assume
|
|
that /docker/ is already installed and running on both machines. Finally, I assume
that all the dependencies required by /Ansible/ are installed on both hosts
(~python3-docker~ and ~python3-jsondiff~ on /Ubuntu/).
|
|
|
|
There are *two* types of /nodes/ in a /Swarm/ cluster; ~manager~ and ~worker~.
|
|
The *first* node used to initialize the cluster is the /leader/ node which is
|
|
also a ~manager~ node.
|
|
|
|
***** Leader
|
|
For the ~leader~ node, our tasks are going to be initializing the cluster.
|
|
|
|
Before we do so, let's create our /quick and dirty/ *Ansible* ~inventory~ file.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
all:
|
|
hosts:
|
|
children:
|
|
leader:
|
|
hosts:
|
|
node001:
|
|
ansible_host: 192.168.0.100
|
|
ansible_user: user
|
|
ansible_port: 22
|
|
ansible_become: yes
|
|
ansible_become_method: sudo
|
|
manager:
|
|
worker:
|
|
hosts:
|
|
node002:
|
|
ansible_host: 192.168.0.101
|
|
ansible_user: user
|
|
ansible_port: 22
|
|
ansible_become: yes
|
|
ansible_become_method: sudo
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition warning">
|
|
<p class="admonition-title">warning</p>
|
|
#+END_EXPORT
|
|
This isn't meant to be deployed in *production* in a /professional/ setting. It
|
|
goes without saying, the ~leader~ is static, not highly available and prone to
|
|
failure. The ~manager~ and ~worker~ node tasks are, also, dependent on the
|
|
successful run of the initialization task on the ~leader~.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
Now that we've taken care of categorizing the nodes and writing the /Ansible/
|
|
~inventory~, let's initialize a /Swarm/ cluster.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Init a new swarm cluster
|
|
community.docker.docker_swarm:
|
|
state: present
|
|
advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
|
|
register: clustering_swarm_cluster
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
We use ~hostvars[inventory_hostname]['ansible_default_ipv4']['address']~ which
|
|
returns the IP address of the node itself. This is the IP address used to advertise.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
We use ~register~ to save the returned response from the cluster initialization
|
|
into a new variable we called ~clustering_swarm_cluster~. This will come in handy later.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
This should take care of initializing a new /Swarm/ cluster.
|
|
|
|
You can verify if /Swarm/ is running.
|
|
|
|
#+begin_src shell
|
|
$ docker system info 2>&1 | grep Swarm
|
|
Swarm: active
|
|
#+end_src
|
|
|
|
***** Manager
|
|
If you have a larger number of nodes, you might require more than one ~manager~
|
|
node. To join more /managers/ to the cluster, we can use the power of /Ansible/ again.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Add manager node to Swarm cluster
|
|
community.docker.docker_swarm:
|
|
state: join
|
|
advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
|
|
join_token: "{{ hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Manager'] }}"
|
|
remote_addrs: [ "{{ hostvars[groups['leader'][0]]['ansible_default_ipv4']['address'] }}:2377" ]
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
We access the token we saved earlier on the ~leader~ to join a ~manager~ to the cluster using ~hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Manager']~.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
If we can get a hostvar from a different node, we can also get the IP of such
|
|
node with ~hostvars[groups['leader'][0]]['ansible_default_ipv4']['address']~.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
Now that we've taken care of the ~manager~ code, let's work on the ~worker~ nodes.
|
|
|
|
***** Worker
|
|
Just as easily as we created the /task/ to *join* a ~manager~ node to the cluster,
|
|
we do the same for the ~worker~.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Add worker node to Swarm cluster
|
|
community.docker.docker_swarm:
|
|
state: join
|
|
advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
|
|
join_token: "{{ hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Worker'] }}"
|
|
remote_addrs: [ "{{ hostvars[groups['leader'][0]]['ansible_default_ipv4']['address'] }}:2377" ]
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
Déjà vu when it comes to the ~join_token~, except that we use the ~worker~ token instead.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
The /glue code/ you're looking for that does the magic is this.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Bootstrap Swarm dependencies
|
|
include_tasks: common.yml
|
|
|
|
- name: Bootstrap leader node
|
|
include_tasks: leader.yml
|
|
when: inventory_hostname in groups['leader']
|
|
|
|
- name: Bootstrap manager node
|
|
include_tasks: manager.yml
|
|
when: inventory_hostname in groups['manager']
|
|
|
|
- name: Bootstrap worker node
|
|
include_tasks: worker.yml
|
|
when: inventory_hostname in groups['worker']
|
|
#+end_src
|
|
|
|
Each of the tasks described above should be in its own file, as shown in the
|
|
/glue code/, and they will *only* run on the group they are meant to run on.
|
|
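With the ~inventory~ and the /tasks/ in place, a run boils down to a single command. The file names here are hypothetical; adjust them to your layout.

#+begin_src shell
ansible-playbook -i inventory.yml swarm.yml
#+end_src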
|
|
Following these tasks, I ended up with the cluster below.
|
|
|
|
#+begin_src shell
|
|
# docker node ls
|
|
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
|
|
h4scu4nry2r9p129rsdt88ae2 * node001 Ready Active Leader 20.10.17
|
|
uyn43a9tsdn2n435untva9pae node002 Ready Active 20.10.17
|
|
#+end_src
|
|
|
|
There, we see both nodes and they both seem to be in a ~Ready~ state.
|
|
|
|
**** Conclusion
|
|
If you're /outside/ a professional setting and you find yourself needing to run a
|
|
container orchestration platform, some platforms might be overkill. /Docker
|
|
Swarm/ has great community support in /Ansible/ making the management of small
|
|
clusters on low-resource devices extremely easy. It comes with the added bonus of
|
|
having built-in /service discovery/ and /networking/. Give it a try, you might
|
|
be pleasantly surprised, like I was.
|
|
|
|
*** DONE Deploying Traefik and Pihole on the /Swarm/ home cluster :docker:linux:arm:ansible:traefik:pihole:swarm:raspberry_pi:
|
|
:PROPERTIES:
|
|
:EXPORT_HUGO_LASTMOD: 2022-08-25
|
|
:EXPORT_DATE: 2022-08-25
|
|
:EXPORT_FILE_NAME: deploying-traefik-and-pihole-on-the-swarm-home-cluster
|
|
:CUSTOM_ID: deploying-traefik-and-pihole-on-the-swarm-home-cluster
|
|
:END:
|
|
|
|
In the [[#raspberry-pi-container-orchestration-and-swarm-right-at-home][previous post]], we setup a /Swarm/ cluster. That's fine and dandy but that
|
|
cluster, as far as we're concerned, is useless. Let's change that.
|
|
|
|
#+hugo: more
|
|
|
|
**** Traefik
|
|
I've talked and played with /Traefik/ previously on this blog and here we go
|
|
again, with another orchestration technology. As always, we need an ingress to
|
|
our cluster. /Traefik/ makes a great ingress that's easily configurable with ~labels~.
|
|
|
|
Let's not forget, we're working with /Swarm/ this time around. /Swarm/ stacks
|
|
look very similar to ~docker-compose~ manifests.
|
|
|
|
But, before we do that, there is a small piece of information that we need to be
|
|
aware of. For /Traefik/ to be able to route traffic to our services, both
|
|
/Traefik/ and the service need to be on the same network. Let's make this a bit
|
|
more predictable and manage that network ourselves.
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition warning">
|
|
<p class="admonition-title">warning</p>
|
|
#+END_EXPORT
|
|
Only ~leader~ and ~manager~ nodes will allow interaction with the /Swarm/
|
|
cluster. The ~worker~ nodes will not give you any useful information about the
|
|
cluster.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
***** Network Configuration
|
|
We started with /Ansible/ and we shall continue with /Ansible/. We begin with
|
|
creating the network.
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Create a Traefik Ingress network
|
|
community.docker.docker_network:
|
|
name: traefik-ingress
|
|
driver: overlay
|
|
scope: swarm
|
|
#+end_src
|
|
|
|
***** Ingress
|
|
Once the network is in place, we can go ahead and deploy /Traefik/.
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition warning">
|
|
<p class="admonition-title">warning</p>
|
|
#+END_EXPORT
|
|
This setup is not meant to be deployed in a *production* setting. *SSL*
|
|
certificates require extra configuration steps that might come in a future post.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
#+begin_src yaml
|
|
---
|
|
- name: Deploy Traefik Stack
|
|
community.docker.docker_stack:
|
|
state: present
|
|
name: Traefik
|
|
compose:
|
|
- version: '3'
|
|
services:
|
|
traefik:
|
|
image: traefik:latest
|
|
restart: unless-stopped
|
|
command:
|
|
- --entrypoints.web.address=:80
|
|
- --providers.docker=true
|
|
- --providers.docker.swarmMode=true
|
|
- --accesslog
|
|
- --log.level=INFO
|
|
- --api
|
|
- --api.insecure=true
|
|
ports:
|
|
- "80:80"
|
|
volumes:
|
|
- "/var/run/docker.sock:/var/run/docker.sock:ro"
|
|
networks:
|
|
- traefik-ingress
|
|
deploy:
|
|
replicas: 1
|
|
resources:
|
|
limits:
|
|
cpus: '1'
|
|
memory: 80M
|
|
reservations:
|
|
cpus: '0.5'
|
|
memory: 40M
|
|
placement:
|
|
constraints:
|
|
- node.role == manager
|
|
|
|
labels:
|
|
- traefik.protocol=http
|
|
- traefik.docker.network=traefik-ingress
|
|
- traefik.http.routers.traefik-api.rule=Host(`traefik.our-domain.com`)
|
|
- traefik.http.routers.traefik-api.service=api@internal
|
|
- traefik.http.services.traefik-api.loadbalancer.server.port=8080
|
|
|
|
networks:
|
|
traefik-ingress:
|
|
external: true
|
|
#+end_src
|
|
|
|
#+BEGIN_EXPORT html
|
|
<div class="admonition note">
|
|
<p class="admonition-title">Note</p>
|
|
#+END_EXPORT
|
|
Even though these are /Ansible/ tasks, /Swarm/ stack manifests are not much
|
|
different, as I'm mostly using the raw format.
|
|
#+BEGIN_EXPORT html
|
|
</div>
|
|
#+END_EXPORT
|
|
|
|
Let's talk a bit about what we did.
|
|
- ~--providers.docker=true~ and ~--providers.docker.swarmMode=true~ :: We
|
|
configure /Traefik/ to enable both /docker/ and /swarm/ mode providers.
|
|
- ~--api~ and ~--api.insecure=true~ :: We enable the API which offers the UI
|
|
and we allow it to run insecure.
|
|
|
|
The rest, I believe, have been explained in the previous blog post.
|
|
|
|
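Before heading to the browser, we can ask /Swarm/ itself whether the stack is
happy. A quick check from a ~manager~ node, with illustrative output:

#+begin_src shell
# The replicas column should read 1/1 once the service has converged.
$ docker stack services Traefik
ID             NAME              MODE        REPLICAS  IMAGE           PORTS
x2bsqpmnbh3z   Traefik_traefik   replicated  1/1       traefik:latest  *:80->80/tcp
#+end_src
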
If everything went well, and we configured our /DNS/ properly, we should be
welcomed by a /Traefik/ dashboard on ~traefik.our-domain.com~.

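And if the /DNS/ record isn't live yet, faking the ~Host~ header works just as
well for a smoke test. This is a sketch; replace the *IP* with one of your own
node IPs:

#+begin_src shell
# Any node IP works, the swarm routing mesh forwards port 80 to Traefik.
$ curl -s -o /dev/null -w "%{http_code}\n" \
       -H "Host: traefik.our-domain.com" http://192.168.1.100/dashboard/
200
#+end_src
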
**** Pi-hole
Now I know most people install /Pi-hole/ straight on the /Pi/. Well, I'm not
most people and I'd like to deploy it in a container. I feel it's easier all
around than installing it on the system, you'll see.

#+begin_src yaml
---
- name: Deploy PiHole Stack
  community.docker.docker_stack:
    state: present
    name: PiHole
    compose:
      - version: '3'
        services:
          pihole:
            image: pihole/pihole:latest
            restart: unless-stopped
            ports:
              - "53:53"
              - "53:53/udp"
            cap_add:
              - NET_ADMIN
            environment:
              TZ: "Europe/Vienna"
              VIRTUAL_HOST: pihole.our-domain.com
              VIRTUAL_PORT: 80
            healthcheck:
              test: ["CMD", "curl", "-f", "http://localhost:80/"]
              interval: 30s
              timeout: 20s
              retries: 3
            volumes:
              - /opt/pihole/data/pihole-config:/etc/pihole
              - /opt/pihole/data/pihole-dnsmasq.d:/etc/dnsmasq.d
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              placement:
                constraints:
                  - node.role == worker
              labels:
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.pihole-http.entrypoints=web
                - traefik.http.routers.pihole-http.rule=Host(`pihole.our-domain.com`)
                - traefik.http.routers.pihole-http.service=pihole-http
                - traefik.http.services.pihole-http.loadbalancer.server.port=80
                - traefik.http.routers.pihole-http.middlewares=pihole-main
                - traefik.http.middlewares.pihole-main.chain.middlewares=frame-deny,browser-xss-filter
                - traefik.http.middlewares.frame-deny.headers.framedeny=true
                - traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true

        networks:
          traefik-ingress:
            external: true
#+end_src

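As before, we can verify the deployment from a ~manager~ node and confirm that
the placement constraint landed the task on a ~worker~. Illustrative output;
your node names will differ:

#+begin_src shell
# Show where the task is running and its current state.
$ docker stack ps PiHole --format "{{.Name}} {{.Node}} {{.CurrentState}}"
PiHole_pihole.1 worker-01 Running 2 minutes ago
#+end_src
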
We make sure to expose port ~53~ for *DNS* on all nodes, and we configure the
proper ~labels~ on our service so that /Traefik/ can pick it up.

Once it's deployed and your /DNS/ is pointing properly, ~pihole.our-domain.com~
will be waiting for you. This also shows us that the networking between nodes
works properly. Let's test it out.

#+begin_src shell
$ nslookup duckduckgo.com pihole.our-domain.com
Server:         pihole.our-domain.com
Address:        192.168.1.100#53

Non-authoritative answer:
Name:   duckduckgo.com
Address: 52.142.124.215
#+end_src

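While we're at it, we can check the actual ad-blocking too. Assuming the domain
below is on one of the default blocklists, the /Pi-hole/ should sinkhole it to
~0.0.0.0~:

#+begin_src shell
$ nslookup doubleclick.net pihole.our-domain.com
Server:         pihole.our-domain.com
Address:        192.168.1.100#53

Name:   doubleclick.net
Address: 0.0.0.0
#+end_src
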
Alright, it seems that our /Pi-hole/ works.

**** Conclusion
On these small Raspberry Pis, the cluster seems to be working very well. The
/Pi-hole/ has been serving my internal /DNS/ without any issues for a few days
now. There are a few improvements that can be made to this setup, mainly the
deployment of an /SSL/ cert. That may come in the future, time permitting. Stay
safe, until the next one !
** K3s :@k3s:
*** DONE Building k3s on a Pi :arm:kubernetes:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2020-08-09
:EXPORT_DATE: 2020-08-09
:EXPORT_FILE_NAME: building-k3s-on-a-pi
:CUSTOM_ID: building-k3s-on-a-pi
:END:

I have had a *Pi* lying around, used for a simple task, for a while now.
A few days ago, I was browsing the web, learning more about privacy, when I stumbled upon [[https://adguard.com/en/welcome.html][AdGuard Home]].

I have been using it as my internal DNS on top of the security and privacy layers I add to my machine.
Its benefits can be argued but it is a DNS after all and I wanted to see what else it could do for me.
Anyway, I digress. I searched to see if I could find a container for *AdGuard Home* and I did.

At this point, I started thinking about what I could do to make the [[https://www.raspberrypi.org/][Pi]] more useful.

That's when [[https://k3s.io/][k3s]] came into the picture.
#+hugo: more

**** Pre-requisites
As this is not a *Pi* tutorial, I am going to assume that you have a /Raspberry Pi/ with *Raspberry Pi OS* /Buster/ installed on it.
The assumption does not mean you cannot install any other OS on the Pi and run this setup.
It only means that I have tested this on /Buster/ and that your mileage may vary.

**** Prepare the Pi
Now that you have /Buster/ already installed, let's go ahead and [[https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster][fix]] a small default configuration issue with it.
*K3s* uses =iptables= to route things around correctly. /Buster/ uses =nftables= by default, so let's switch it to =iptables=.

#+BEGIN_EXAMPLE
$ sudo iptables -F
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo reboot
#+END_EXAMPLE

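If you want to confirm the switch after the reboot, =update-alternatives= can
tell you which variant is active. It should point at the legacy binary:

#+BEGIN_EXAMPLE
$ sudo update-alternatives --display iptables | grep 'currently points'
  link currently points to /usr/sbin/iptables-legacy
#+END_EXAMPLE
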
At this point, your /Pi/ should reboot. Once it comes back up, your *OS* is configured for the next step.

**** Pre-install Configuration
After testing *k3s* a few times, I found out that by /default/ it will deploy a few extra services like [[https://docs.traefik.io/][Traefik]].

Unfortunately, just like anything else, the /default/ configuration is just that. It's plain and not very useful from the start. You will need to tweak it.

This step could be done either /post/ or /pre/ deploy. Figuring out the /pre-deploy/ is a bit more involved but a bit more fun as well.

The first thing you need to know is that the normal behavior of *k3s* is to deploy anything found in =/var/lib/rancher/k3s/server/manifests/=.
So a good first step is, of course, to proceed with creating that.

#+BEGIN_EXAMPLE
$ sudo mkdir -p /var/lib/rancher/k3s/server/manifests/
#+END_EXAMPLE

The other thing to know is that *k3s* can deploy /Helm Charts/.
Before beginning the setup, it will create the /manifests/ it deploys by default in the manifest path I mentioned.
If you would like to see what it deployed and how, visit that path after *k3s* runs.
I did, and I took their configuration of *Traefik*, whose /defaults/ I was unhappy with.

My next step was securing the /defaults/ as much as possible, and I found out that *Traefik* can do [[https://docs.traefik.io/v2.0/middlewares/basicauth/][basic authentication]].
As a starting point, that's great. Let's create the credentials.

#+BEGIN_EXAMPLE
$ htpasswd -c ./auth myUser
#+END_EXAMPLE

That was easy so far. Let's turn it up a notch and create the manifest for *k3s*.

Create =traefik.yaml= in =/var/lib/rancher/k3s/server/manifests/= with the following content.

#+BEGIN_SRC yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.81.0.tgz
  valuesContent: |-
    rbac:
      enabled: true
    ssl:
      enabled: true
    dashboard:
      enabled: true
      domain: traefik-ui.example.com
      auth:
        basic:
          myUser: $ars3$4A5tdstr$trSDDa4467Tsa54sTs.
    metrics:
      prometheus:
        enabled: false
    kubernetes:
      ingressEndpoint:
        useDefaultPublishedService: true
    image: "rancher/library-traefik"
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
#+END_SRC

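Once *k3s* is up, in a later step, you will be able to confirm that this
manifest got picked up by checking the pods in =kube-system=. Illustrative
output; the pod hash will differ:

#+BEGIN_EXAMPLE
$ kubectl -n kube-system get pods
NAME                      READY   STATUS    RESTARTS   AGE
traefik-abc123def-x7k2p   1/1     Running   0          2m
...
#+END_EXAMPLE
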
It's a *Pi*; I don't need prometheus, so I disabled it.
I also enabled the dashboard and added the credentials we created in the previous step.

Now, the /Helm Chart/ will deploy an ingress and expose the dashboard for you on the value of =domain=.

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
I figured out the values to set in =valuesContent= by reading the /Helm Chart/ itself.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

**** K3s
If everything is in place, you are ready to proceed.
You can install *k3s* now, but before I get to that step, I will say a few things about *k3s*.
*K3s* has a smaller feature set than *k8s*, hence the smaller footprint.
Read the documentation to see if you need any of the missing features.
The second thing to mention is that *k3s* is a single-binary deployment that uses *containerd*.
That's why we will use the script installation method, as it adds the necessary *systemd* configuration for us.
It is a nice gesture.

Let's do that, shall we ?

#+BEGIN_EXAMPLE
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik
#+END_EXAMPLE

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title"><b>Note</b></p>
#+END_EXPORT
We need to make sure that *k3s* does not deploy its own *traefik* but uses ours instead.
That is why we add =--no-deploy traefik= to the deployment command.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

Point =traefik.example.com= to your *Pi* =IP= in =/etc/hosts= on your machine.

#+BEGIN_EXAMPLE
192.168.0.5 traefik.example.com
#+END_EXAMPLE

When the installation command is done, you should be able to visit [[http://traefik.example.com/][http://traefik.example.com/]].

You can get the /kubeconfig/ from the /Raspberry Pi/; you will find it in =/etc/rancher/k3s/k3s.yaml=. You will need to change the =server= *IP*.

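Here's a minimal sketch of what that could look like from your machine; the user, the *IP* and the destination path are all assumptions, adjust them to your setup:

#+BEGIN_EXAMPLE
$ scp pi@192.168.0.5:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
$ sed -i 's/127.0.0.1/192.168.0.5/' ~/.kube/k3s.yaml
$ kubectl --kubeconfig ~/.kube/k3s.yaml get nodes
#+END_EXAMPLE
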
**** Conclusion
If you've made it this far, you should have a *k3s* cluster running on a single /Raspberry Pi/.
The next step you might want to look into is disabling the /metrics/ server and using the resources for other things.

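If you go down that road, the installation script accepts more than one =--no-deploy= flag. Assuming the flag names haven't changed since, that would look something like:

#+BEGIN_EXAMPLE
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --no-deploy metrics-server
#+END_EXAMPLE
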
** Kubernetes :@kubernetes:
*** DONE Minikube Setup :minikube:ingress:ingress_controller:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2019-07-02
:EXPORT_DATE: 2019-02-09
:EXPORT_FILE_NAME: minikube-setup
:CUSTOM_ID: minikube-setup
:END:

If you have ever worked with /kubernetes/, you'd know that minikube out of the box does not give you what you need for a quick setup. I'm sure you can go =minikube start=, everything's up... Great... =kubectl get pods -n kube-system=... It works, let's move on...

But what if it's not "let's move on to something else" just yet ? We need to look at this as a local test environment and explore its capabilities. We can learn so much from it before applying what we learn to the lab. But, as always, there are a few tweaks we need to perform to give it the magic it needs to be a real environment.
#+hugo: more

**** Prerequisites
If you are looking into /kubernetes/, I would suppose that you know your linux's ABCs and you can install and configure /minikube/ and its prerequisites prior to the beginning of this tutorial.

You can find the guide to install /minikube/ and configure it on the /minikube/ [[https://kubernetes.io/docs/setup/minikube/][webpage]].

Anyway, make sure you have /minikube/ installed, /kubectl/ and whatever driver dependencies you need to run it under that driver. In my case, I am using /kvm2/ which will be reflected in the commands given to start /minikube/.

**** Starting /minikube/
Let's start minikube.

#+BEGIN_EXAMPLE
$ minikube start --vm-driver=kvm2
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!
#+END_EXAMPLE

Great... At this point we have a cluster that's running, let's verify. In my case, I can check with =virsh=; if you used /VirtualBox/, you can check that instead.

#+BEGIN_EXAMPLE
$ virsh list
 Id    Name       State
---------------------------
 3     minikube   running
#+END_EXAMPLE

We can also test with =kubectl=.

#+BEGIN_EXAMPLE
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
#+END_EXAMPLE

Now what ? Well, now we deploy a few addons that we need to deploy in production as well for a functioning /kubernetes/ cluster.

Let's check the list of add-ons available out of the box.

#+BEGIN_EXAMPLE
$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: enabled
- ingress: enabled
- kube-dns: disabled
- metrics-server: enabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
#+END_EXAMPLE

Make sure you have /dashboard/, /heapster/, /ingress/ and /metrics-server/ *enabled*. You can enable add-ons with =minikube addons enable=.

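For example, if /ingress/ happens to be disabled on your machine, enabling it is a one-liner:

#+BEGIN_EXAMPLE
$ minikube addons enable ingress
ingress was successfully enabled
#+END_EXAMPLE
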
**** What's the problem then ?
Here's the problem that comes next. How do you access the dashboard or anything running in the cluster ? Everyone online suggests you proxy a port and access the dashboard that way. Is that really how it should work ? Is that how production systems do it ?

The answer is of course not. They use different types of /ingresses/ at their disposal. In this case, /minikube/ was kind enough to provide one for us, the default /kubernetes ingress controller/. It's a great option for an ingress controller that's solid enough for production use. Fine, a lot of babble. Yes sure, but this babble is important. So how do we access stuff on a cluster ?

To answer that question we need to understand a few things. Yes, you can use a =NodePort= on your service and access it that way. But do you really want to manage these ports ? What's in use and what's not ? Besides, wouldn't it be better if you could use one port for all of the services ? How, you may ask ?

We've been doing it for years, and by we I mean /ops/ and /devops/ people. You have to understand that the kubernetes ingress controller is simply an /nginx/ under the covers. We've always been able to configure /nginx/ to listen for a specific /hostname/ and redirect it where we want to. It shouldn't be that hard to do, right ?

Well, this is what an ingress controller does. It uses the default ports to route traffic from the outside according to the hostname called. Let's look at our cluster and see what we need.

#+BEGIN_EXAMPLE
$ kubectl get services --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP             17m
kube-system   default-http-backend   NodePort    10.96.77.15      <none>        80:30001/TCP        17m
kube-system   heapster               ClusterIP   10.100.193.109   <none>        80/TCP              17m
kube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP       17m
kube-system   kubernetes-dashboard   ClusterIP   10.106.156.91    <none>        80/TCP              17m
kube-system   metrics-server         ClusterIP   10.103.137.86    <none>        443/TCP             17m
kube-system   monitoring-grafana     NodePort    10.109.127.87    <none>        80:30002/TCP        17m
kube-system   monitoring-influxdb    ClusterIP   10.106.174.177   <none>        8083/TCP,8086/TCP   17m
#+END_EXAMPLE

In my case, you can see that I have a few things that are in =NodePort= configuration and you can access them on those ports. But the /kubernetes-dashboard/ is a =ClusterIP= and we can't get to it. So let's change that by adding an ingress to the service.

**** Ingress
An ingress is an object of kind =ingress= that configures the ingress controller of your choice.

#+BEGIN_SRC yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.kube.local
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
#+END_SRC

Save that to a file =kube-dashboard-ingress.yaml= or something, then run.

#+BEGIN_EXAMPLE
$ kubectl apply -f kube-dashboard-ingress.yaml
ingress.extensions/kubernetes-dashboard created
#+END_EXAMPLE

And now we get this.

#+BEGIN_EXAMPLE
$ kubectl get ingress --all-namespaces
NAMESPACE     NAME                   HOSTS                  ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard   dashboard.kube.local             80      17s
#+END_EXAMPLE

Now all we need to know is the IP of our kubernetes cluster of /one/.
Don't worry, /minikube/ makes it easy for us.

#+BEGIN_EXAMPLE
$ minikube ip
192.168.39.79
#+END_EXAMPLE

Now let's add that host to our =/etc/hosts= file.

#+BEGIN_EXAMPLE
192.168.39.79 dashboard.kube.local
#+END_EXAMPLE

Now if you go to [[http://dashboard.kube.local]] in your browser, you will be welcomed with the dashboard. How is that so ? Well, as I explained, point it to the nodes of the cluster with the proper hostname and it works.

You can deploy multiple services that can be accessed this way. You can also integrate this with a service mesh or a service discovery tool, which could find the up and running nodes and redirect you to them at all times. But this is the clean way to expose services outside the cluster.
*** DONE Your First Minikube Helm Deployment :minikube:ingress:helm:prometheus:grafana:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2019-06-21
:EXPORT_DATE: 2019-02-10
:EXPORT_FILE_NAME: your-first-minikube-helm-deployment
:CUSTOM_ID: your-first-minikube-helm-deployment
:END:

In the last post, we have configured a basic /minikube/ cluster. In this post we will deploy a few items we will need in a cluster and maybe, in the future, experiment with it a bit.
#+hugo: more

**** Prerequisite
During this post, and probably during future posts, we will be using /helm/ to deploy to our /minikube/ cluster; some charts offered by the helm team, others by the community and maybe, eventually, our own. We need to install =helm= on our machine. It should be as easy as downloading the binary, but if you can find it in your package manager, go that route.

**** Deploying Tiller
Before we can start with the deployments using =helm=, we need to deploy /tiller/. It's the server-side service that handles communication with the client and manages deployments.

#+BEGIN_EXAMPLE
$ helm init --history-max=10
Creating ~/.helm
Creating ~/.helm/repository
Creating ~/.helm/repository/cache
Creating ~/.helm/repository/local
Creating ~/.helm/plugins
Creating ~/.helm/starters
Creating ~/.helm/cache/archive
Creating ~/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run ``helm init`` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
#+END_EXAMPLE
/Tiller/ is deployed, give it a few minutes for the pods to come up.

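You can keep an eye on it with =kubectl=; once the /tiller/ pod reports ~Running~, you're good to go. Illustrative output, the pod hash will differ:

#+BEGIN_EXAMPLE
$ kubectl get pods -n kube-system -l name=tiller
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-6d65d78679-whl4n   1/1     Running   0          1m
#+END_EXAMPLE
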
**** Deploy Prometheus
We often need to monitor multiple aspects of the cluster easily. Sometimes, we may even write our applications to (let's say) publish metrics to prometheus. And I said "let's say" because technically we expose an endpoint that the prometheus server scrapes regularly and stores. Anyway, let's deploy prometheus.

#+BEGIN_EXAMPLE
$ helm install stable/prometheus-operator --name prometheus-operator --namespace kube-prometheus
NAME: prometheus-operator