openSUSE with Passwordless U2F Login

openSUSE Geeko with Yubikey

I have a Yubikey 5 NFC that I use for 2-Factor Authentication (2FA) on websites that support it and for storing my GPG keys.

I recently got a new laptop, and I quickly got tired of trying to remember a long new password. I have also gotten used to things like Windows Hello at work, where a PIN or fingerprint can be used to log in.

Most of the articles I found about setting up U2F in Linux were using Ubuntu, and since I am using openSUSE and some files are in different places, this post documents that process. Although Universal 2nd Factor (U2F) on the Yubikey can be used to add 2FA to make your Linux laptop more secure, my focus is on a passwordless login so that I don't need to enter a long password at all to log in to my Linux account.

WARNING: An erroneous PAM configuration may lock you completely out of your system or prevent you from gaining root privileges. Before getting started, open a terminal and use su to switch to root. Keep that terminal open, and test your configuration thoroughly before closing it.
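For example, to keep that safety net in place while you work (the prompt changing to # indicates a root shell):

$ su -
Password: 
# 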

Installing the Required Software

The openSUSE repository includes a package called pam_u2f. This package adds U2F support to Pluggable Authentication Modules (PAM), the framework Linux uses to configure how users are authenticated. In this case we want to authenticate using a U2F module, so we install it by opening a terminal and typing:

$ sudo zypper in pam_u2f

Associating the U2F Key With Your Account

The U2F PAM module needs an authentication file that associates the user name that will log in with the Yubikey token. Open a terminal and insert your Yubikey.

$ mkdir -p ~/.config/Yubico
$ pamu2fcfg -u $(whoami) >> ~/.config/Yubico/u2f_keys

When your device begins flashing, touch the metal contact to confirm the association.
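If you are curious, take a look at the file that pamu2fcfg appended to. Each line maps a user name to one or more registered keys; the exact fields depend on your pam_u2f version, but with a recent version a line looks roughly like this (values shortened here):

$ cat ~/.config/Yubico/u2f_keys
dan:owBYq3mA…,aBcDeF…,es256,+presence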

For increased security, we'll next move the u2f_keys file to an area where you'll need sudo permission to edit the file.

$ sudo mkdir -p /etc/Yubico
$ sudo mv  ~/.config/Yubico/u2f_keys /etc/Yubico/u2f_keys
$ sudo chown root.root /etc/Yubico/u2f_keys

Edit the PAM Configuration

Once the u2f_keys file is moved to a safer location, the PAM configuration needs to be modified so that the pam_u2f module can find it. openSUSE generates these PAM configuration files, which end in "pc", with the pam-config utility, and then symbolically links the main PAM configuration files to them. Unfortunately, we can't use pam-config to configure the pam_u2f module, so we'll need to edit the configuration manually. We only need to change the main PAM authentication configuration, called common-auth, so first we remove the symbolic link and then copy the generated configuration so that we can edit it.

$ cd /etc/pam.d
$ sudo rm common-auth
$ sudo cp common-auth-pc common-auth

Now edit the configuration file that you created. I would normally use vim to make a quick edit to a file, but I will use nano instead in case you aren't familiar with vim commands:

$ sudo nano common-auth

Scroll to the bottom (or hit Alt+/); there should be three lines that aren't commented out:

auth    required        pam_env.so
auth    optional        pam_gnome_keyring.so
auth    required        pam_unix.so     try_first_pass

After the pam_gnome_keyring line, add a new line so that your file looks like:

auth    required        pam_env.so      
auth    optional        pam_gnome_keyring.so
auth    sufficient      pam_u2f.so      authfile=/etc/Yubico/u2f_keys cue
auth    required        pam_unix.so     try_first_pass

Let's discuss what this did:

  • auth adds a new rule for login authentication
  • sufficient means that the Yubikey alone is enough to log in, but the module isn't required, so you can still log in by entering your password instead (see the variant after this list if you want the key as a second factor)
  • pam_u2f.so is the PAM U2F module
  • authfile sets the authentication file that we created earlier
  • cue prints a prompt to remind you to touch your Yubikey
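As mentioned in the introduction, the same module can instead provide 2FA rather than passwordless login. A variant (not what this post sets up) keeps pam_unix required and marks pam_u2f required as well, so that both the password and a key touch are needed:

auth    required        pam_env.so
auth    optional        pam_gnome_keyring.so
auth    required        pam_unix.so     try_first_pass
auth    required        pam_u2f.so      authfile=/etc/Yubico/u2f_keys cue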

When you are done adding the configuration line, save the file by pressing Ctrl+x and then hit enter.

Test Logging In

Before you close your su terminal, make sure that logging in works using U2F. In a new terminal, try to log in:

$ su - $(whoami)
Password: 

Instead of entering your password, hit enter. You should see a prompt to touch your device:

Please touch the device.
$

If that is successful then congrats, you should now be able to restart your computer and log in using your Yubikey!

Troubleshooting - Enable Debug Mode

If you are unable to log in and are unsure why, you can enable debugging on the Yubico PAM module. First open a terminal, then execute:

$ sudo touch /var/log/pam_u2f.log

Edit the /etc/pam.d/common-auth file again and add debug debug_file=/var/log/pam_u2f.log to the end of the line that you added earlier. Save the file, and each login attempt will now be logged to /var/log/pam_u2f.log.
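With the debug options added, the pam_u2f line from earlier should look like this:

auth    sufficient      pam_u2f.so      authfile=/etc/Yubico/u2f_keys cue debug debug_file=/var/log/pam_u2f.log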

Unlock GNOME Keyring

If you are using GNOME, even though you successfully logged in with your Yubikey, GNOME will still ask you to unlock your login keyring with your login password. This defeats the purpose of setting up your Yubikey in the first place. There is a project called gnome-keyring-yubikey-unlock that solves this by encrypting the keyring-name : password pair with GnuPG and saving it as a secret file. Then, on starting GNOME, a script automatically runs that calls GnuPG to decrypt the secret file and uses the password to unlock your keyring.

To build and install it on openSUSE, run the following commands:

$ sudo zypper in libgnome-keyring-devel git
$ git clone https://git.recolic.net/root/gnome-keyring-yubikey-unlock --recursive
$ cd gnome-keyring-yubikey-unlock/src
$ make
$ cd ..

Next we need to get your public key id:

$ gpg --list-keys
/home/dan/.gnupg/pubring.kbx
----------------------------
pub   rsa4096 2020-12-22 [SC]
      30EE9BFEC3FD0B37F9088DBE42239C515C9B9841
uid           [ultimate] Dan Yeaw <dan@yeaw.me>
sub   rsa4096 2021-11-10 [A]
sub   rsa4096 2021-11-10 [E]
sub   rsa4096 2021-11-10 [S]
sub   rsa4096 2020-12-22 [E]

The hexadecimal id that starts with 30EE9BF is my public gpg key id. Next we are going to create the encrypted keyring password pair. Replace YOUR_PUBLIC_GPG_KEY with your public gpg key id from the last step and replace YOUR_LOGIN_PASSWORD with the password for your user account.

$ ./create_secret_file.sh ~/.gnupg/gnome_keyring_yubikey_secret YOUR_PUBLIC_GPG_KEY
>>> Please type keyring_name and password in the following format:

keyring1:password1
keyring2:password2

login:12345678

>>> When you are done, use Ctrl-D to end.
login:YOUR_LOGIN_PASSWORD

Next we want to change the permissions of the file, so that only your user can read and write to the file:

$ chmod 600 ~/.gnupg/gnome_keyring_yubikey_secret

Finally, create an autostart entry so that the script loads when you login to GNOME:

$ nano ~/.config/autostart/net.recolic.gnome-keyring-yubikey-unlock.desktop

Add the following to the file, replacing YOUR_USER with your username:

[Desktop Entry]
Type=Application
Exec=/home/YOUR_USER/Projects/gnome-keyring-yubikey-unlock/unlock_keyrings.sh /home/YOUR_USER/.gnupg/gnome_keyring_yubikey_secret
Hidden=false
X-GNOME-Autostart-enabled=true
Name=GNOME Keyring Yubikey Unlock
Comment=Unlocks the GNOME Login Keyring without password

Hit Control+x and then enter to save the file. Restart your computer, and you should now be able to login and run openSUSE without manually entering your password.

GitHub Actions: Automate Your Python Development Workflow

At GitHub Universe 2018, GitHub launched GitHub Actions in beta. Later, in August 2019, GitHub announced the expansion of GitHub Actions to include Continuous Integration / Continuous Delivery (CI/CD). At Universe 2019, GitHub announced that Actions are out of beta and generally available. I spent the last few days, while taking some vacation during Thanksgiving, exploring GitHub Actions for the automation of Python projects.

With my involvement in the Gaphor project, we have a GUI application to maintain, as well as two libraries: a diagramming widget called Gaphas and, more recently, a library that enables multidispatch and events called Generic, which we took over maintaining. It is important to have an efficient programming workflow to maintain these projects, so we can spend more of our open source volunteer time focusing on implementing new features and other enjoyable parts of programming, and less time doing manual and boring project maintenance.

In this blog post, I am going to give an overview of what CI/CD is, my previous experience with other CI/CD systems, how to test and deploy Python applications and libraries using GitHub Actions, and finally highlight some other Actions that can be used to automate other parts of your Python workflow.

Overview of CI/CD

Continuous Integration (CI) is the practice of frequently integrating changes to code with the existing code repository.

Continuous Integration

Continuous Delivery / Deployment (CD) then extends CI by making sure the software checked in to the master branch is always in a state to be delivered to users, and automates the deployment process.

Continuous Delivery / Deployment

For open source projects on GitHub or GitLab, the workflow often looks like:

  1. The latest development is on the mainline branch called master.
  2. Contributors create their own copy of the project, called a fork, and then clone their fork to their local computer and setup a development environment.
  3. Contributors create a local branch on their computer for a change they want to make, add tests for their changes, and make the changes.
  4. Once all the unit tests pass locally, they commit the changes and push them to the new branch on their fork.
  5. They open a Pull Request to the original repo.
  6. The Pull Request kicks off a build on the CI system automatically, runs formatting and other lint checks, and runs all the tests.
  7. Once all the tests pass, and the maintainers of the project are good with the updates, they merge the changes back to the master branch.

Either on a fixed release cadence or as needed, the maintainers then add a version tag to master and kick off the CD system to package and release a new version to users.

My Experience with other CI/CD Systems

Since most open source projects didn't want the overhead of maintaining their own CI server using software like Jenkins, cloud-based or hosted CI services became very popular over the last seven years. The most frequently used of these was Travis CI, with Circle CI a close second. Although both of these services introduced initial support for Windows over the last year, the majority of users are running tests on Linux and macOS only. It is common for projects using Travis or Circle to use another service called AppVeyor if they need to test on Windows.

I think the popularity of Travis CI and similar services is based on how easy they are to get going with. You log in to the service with your GitHub account, tell the service to test one of your projects, add a YAML formatted file to your repository based on one of the examples, and push to the software repository (repo) to trigger your first build. Although these services are still hugely popular, 2019 was the year they started to lose some of their momentum. In January 2019, a company called Idera bought Travis CI. In February, Travis CI then laid off many of their senior engineers and technical staff.

The 800-pound gorilla entered the space in 2018, when Microsoft bought GitHub in June and then rebranded their Visual Studio Team Services ecosystem and launched Azure Pipelines as a CI service in September. Like most of the popular services, it was free for open source projects. The notable features of this service were that it launched with support for Linux, macOS, and Windows, and that it allowed for 10 parallel jobs. Although the other services offer parallel builds, on some platforms they are limited for open source projects, and I would often be waiting for a server, called an "agent", to become available with Travis CI. Following the lay-offs at Travis CI, I was ready to explore other services, and Azure Pipelines was the new hot CI system.

In March 2019, I was getting ready to launch version 1.0.0 of Gaphor after spending a couple of years helping to update it to Python 3 and PyGObject. We had been using Travis CI, and we were lacking the ability to test and package the app on all three major platforms. I used this as an opportunity to learn Azure Pipelines with the goal of being able to fill this gap we had in our workflow.

My takeaway from this experience is that Azure Pipelines lacks much of Travis CI's ease of use, but has other huge advantages, including build speed and the flexibility and power to create complex cross-platform workflows. Developing a complex workflow on any of these CI systems is challenging because the feedback takes a long time to get back to you. In order to create a workflow, I normally:

  1. Create a branch of the project I am working on
  2. Develop a YAML configuration based on the documentation and examples available
  3. Push to the branch, to kickoff the CI build
  4. Realize that something didn't work as expected after 10 minutes of waiting for the build to run
  5. Go back to step 2 and repeat, over and over again

One of my other main takeaways was that the documentation often lacked good examples of complex workflows and was not very clear on how to use each step. This drove even more trial and error, which requires a lot of patience as you work toward a solution. After a lot of effort, I was able to complete a configuration that tested Gaphor on Linux, macOS, and Windows. I was also able to partially get the CD to work by setting up Pipelines to add the built dmg file for macOS to a draft release when I push a new version tag. A couple of weeks ago, I was also able to build and upload the Python Wheel and source distribution, along with the Windows binaries built in MSYS2.

Despite the challenges getting there, the result was very good! Azure Pipelines is screaming fast, about twice as fast as Travis CI was for my complex workflows (25 minutes to 12 minutes). The tight integration that allows testing on all three major platforms was also just what I was looking for.

How to Test a Python Library using GitHub Actions

With all the background out of the way, now enters GitHub Actions. Although I was very pleased with how Azure Pipelines performs, I thought it would be nice to have something that could better mix the ease of use of Travis CI with the power Azure Pipelines provides. I hadn't made use of any Actions before trying to replace both Travis and Pipelines on the three Gaphor projects that I mentioned at the beginning of the post.

I started first with the libraries, in order to give GitHub Actions a try with some of the more straightforward workflows before jumping in to converting Gaphor itself. Both Gaphas and Generic were using Travis CI. The workflow was pretty standard for a Python package:

  1. Run lint using pre-commit to run Black over the code base
  2. Use a matrix build to test the library using Python 2.7, 3.6, 3.7, and 3.8
  3. Upload coverage information

To get started with GitHub Actions on a project, go to the Actions tab on the main repo:

GitHub Actions Tab

Because your project is made up mostly of Python, GitHub will suggest three different workflows that you can use as templates to create your own:

  1. Python application - test on a single Python version
  2. Python package - test on multiple Python versions
  3. Publish Python Package - publish a package to PyPI using Twine

Below is the workflow I had in mind:

Library Workflow

I want to start with a lint job that is run, and once that has successfully completed, I want to start parallel jobs using the multiple versions of Python that my library supports.

For these libraries, the 2nd workflow was the closest to what I was looking for, since I wanted to test on multiple versions of Python. I selected the Set up this workflow option. GitHub then creates a new draft YAML file based on the template that you selected, and places it in the .github/workflows directory in your repo. At the top of the screen you can also change the name of the YAML file from pythonpackage.yml to any filename you choose. I called mine build.yml, since calling this type of workflow a build is the nomenclature I am familiar with.

As a side note, the online editor that GitHub has implemented for creating Actions is quite good. It includes full autocomplete (toggled with Ctrl+Space), and it actively highlights errors in your YAML file to ensure the correct syntax for each workflow. These types of error checks are priceless due to the long feedback loop, and at this point I actually recommend using the online editor over what VSCode or PyCharm provide.

Execute on Events

At the top of each workflow file are two keywords: name and on. The name sets what will be displayed in the Actions tab for the workflow you are creating. If you don't define a name, then the name of the YAML file is shown as the Action runs. The on keyword defines what will cause the workflow to be started. The template uses a value of push, which means that the workflow will be kicked off when you push to any branch in the repo. Here is how I set these values for my libraries:

name: Build
on:
  pull_request:
  push:
    branches: master

Instead of running this workflow on any push event, I wanted a build to happen during two conditions:

  1. Any Pull Request
  2. Any push to the master branch

You can see how that was configured above. Being able to start a workflow on any type of event in GitHub is extremely powerful, and it is one of the advantages of the tight integration that GitHub Actions has.

Lint Job

The next section of the YAML file is called jobs; this is where each main block of the workflow is defined as a job. The jobs are then further broken down into steps, and multiple commands can be executed in each step. Each job that you define is given a name. In the template, the job is named build, but there isn't any special significance to this name. The template also runs a lint step for each version of Python being tested against. I decided that I wanted to run lint once as a separate job, and then once that is complete, kick off all the testing in parallel.

In order to add lint as a separate job, I created a new job called lint nested within the jobs keyword. Below is an example of my lint job:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: '3.x'
      - name: Install Dependencies
        run: |
          pip install pre-commit
          pre-commit install-hooks
      - name: Lint with pre-commit
        run: pre-commit run --all-files

Next comes the runs-on keyword, which defines the platform GitHub Actions will run this job on; in this case I am running the linting on the latest available version of Ubuntu. The steps keyword is where most of the workflow content will be, since it defines each step that will be taken as the job runs. Each step optionally gets a name, and then either defines an Action to use or a command to run.

Let's start with the Actions first, since they are the first two steps in my lint job. The keyword for an Action is uses, and the value is the action repo name and the version. I think of Actions as a library, a reusable step that I can use in my CI/CD pipeline without having to reinvent the wheel. GitHub developed these first two Actions that I am making use of, but you will see later that you can make use of any Actions posted by other users, and even create your own using the Actions SDK and some TypeScript. I am now convinced that this is the "secret sauce" of GitHub Actions, and will be what makes this service truly special. I will discuss more about this later.

The first two Actions I am using clone a copy of the code I am testing from my repo and set up Python. Actions often use the with keyword for configuration options, and in this case I am telling the setup-python action to use the latest release of Python 3.

      - uses: actions/checkout@v1
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: '3.x'

The last two steps of the linting job use the run keyword. Here I am defining commands to execute that aren't covered by an Action. As I mentioned earlier, I am using pre-commit to run Black over the project and check that the code formatting is correct. I have this broken up into two steps:

  1. Install Dependencies - installs pre-commit, and the pre-commit hook environments
  2. Lint with pre-commit - runs Black against all the files in the repo

In the Install Dependencies step, I am also using the pipe operator, "|", which signifies that I am giving multiple commands, each separated on a new line. We now have a complete lint job for a Python library. If you haven't already, now would be a good time to commit and push your changes to a branch, and check that the lint job passes for your repo.

Test Job

For the test job, I created another job called test, and it also uses the ubuntu-latest platform. I did use one new keyword here called needs. This specifies that the job should only be started once the lint job has finished successfully. If I didn't include it, the lint job and all the test jobs would be started in parallel.

  test:
    needs: lint
    runs-on: ubuntu-latest

Next up I used another new keyword called strategy. A strategy creates a build matrix for your jobs. A build matrix is a set of different configurations of the virtual environment used for the job. For example, you can run a job against multiple operating systems, tool versions, or, in this case, different versions of Python. This avoids repetition, because otherwise you would need to copy and paste the same steps over and over again for each version of Python. Finally, the template we are using also had a max-parallel keyword, which limits the number of parallel jobs that can run simultaneously. I am only using four versions of Python, and I don't have any reason to limit the number of parallel jobs, so I removed this line from my YAML file.

    strategy:
      matrix:
        python-version: [2.7, 3.6, 3.7, 3.8]

Now on to the steps of the job. My first two steps, checking out the sources and setting up Python, are the same as in the lint job. There is one difference: I am using the ${{ matrix.python-version }} syntax in the setup Python step. The {{ }} syntax defines an expression. I am using a special kind of expression called a context, which is a way to access information about a workflow run, the virtual environment, jobs, steps, and, in this case, the Python version from the matrix parameters that I configured earlier. Finally, the $ symbol in front of the context expression tells Actions to expand the expression into its value. If version 3.8 of Python is currently running from the matrix, then ${{ matrix.python-version }} is replaced by 3.8.

    steps:
      - uses: actions/checkout@v1
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}

Since I am testing a GTK diagramming library, I also need to install some Ubuntu dependencies. I use the > symbol as YAML syntax to fold the newlines in my run value; this allows me to execute a really long command while keeping the standard line length in my .yml file.

      - name: Install Ubuntu Dependencies
        run: >
          sudo apt-get update -q && sudo apt-get install
          --no-install-recommends -y xvfb python3-dev python3-gi
          python3-gi-cairo gir1.2-gtk-3.0 libgirepository1.0-dev libcairo2-dev

For my projects, I love using Poetry for managing my Python dependencies. See my other article on Python Packaging with Poetry and Briefcase for more information on how to make use of Poetry for your projects. I am using a custom Action that Daniel Schep created that installs Poetry. Although installing Poetry manually is pretty straightforward, I really like being able to make use of these building blocks that others have created. Although you should always use a Python virtual environment in a local development environment, one isn't really needed here, since the environment created for CI/CD is already isolated and won't be reused. It would be a nice improvement to the install-poetry-action if the creation of virtualenvs were turned off by default.

      - name: Install Poetry
        uses: dschep/install-poetry-action@v1.2
        with:
          version: 1.0.0b3
      - name: Turn off Virtualenvs
        run: poetry config virtualenvs.create false

Next we have Poetry install the dependencies from the poetry.lock file using the poetry install command. Then we get to the key step of the job, which is to run all the tests using Pytest. I preface the pytest command with xvfb-run because this is a GUI library, and many of the tests would fail because there is no display server, like X or Wayland, running on the CI runner. The X virtual framebuffer (Xvfb) display server performs all the graphical operations in memory without showing any screen output.

      - name: Install Python Dependencies
        run: poetry install
      - name: Test with Pytest
        run: xvfb-run pytest

The final step of the test phase is to upload the code coverage information. We are using Code Climate for analyzing coverage, because it also provides a nice maintainability score based on things like code smells and duplication it detects. I find this to be a good tool to help us focus our refactoring and other maintenance efforts. Coveralls and Codecov are good options that I have used as well. In order for the code coverage information to be recorded while Pytest is running, I am using the pytest-cov Pytest plugin.
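If you haven't used pytest-cov before, enabling it is just a flag on the pytest command. The exact flags for your project may differ; assuming the package under test is importable as gaphas, the test step would look something like this, and the coverage data it writes is what the coverage xml command in the Code Climate step below converts for upload:

      - name: Test with Pytest
        run: xvfb-run pytest --cov=gaphas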

      - name: Code Climate Coverage Action
        uses: paambaati/codeclimate-action@v2.3.0
        env:
          CC_TEST_REPORTER_ID: 195e9f83022747c8eefa3ec9510dd730081ef111acd99c98ea0efed7f632ff8a
        with:
          coverageCommand: coverage xml

CD Workflow - Upload to PyPI

I am using a second workflow for my app, and this workflow would actually be more in place for a library, so I'll cover it here. The Python Package Index (PyPI) is normally how we share libraries across Python projects, and it is where they are installed from when you run pip install. Once I am ready to release a new version of my library, I want the CD pipeline to upload it to PyPI automatically.

If you recall from earlier, the third GitHub Action Python workflow template was called Publish Python Package. This template is close to what I needed for my use case, except I am using Poetry to build and upload instead of using setup.py to build and Twine to upload. I also used a slightly different event trigger.

on:
  release:
    types: published

This sets my workflow to execute when I fully publish the GitHub release. The Publish Python Package template used the created event instead. However, it makes more sense to me to publish the new version and then upload it to PyPI, instead of uploading to PyPI and then publishing. Once a version is uploaded to PyPI it can't be re-uploaded; a new version has to be created to upload again. In other words, doing the most permanent step last is my preference.

The rest of the workflow, until we get to the last step, should look very similar to the test workflow:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Set up Python
      uses: actions/setup-python@v1
      with:
        python-version: '3.x'
    - name: Install Poetry
      uses: dschep/install-poetry-action@v1.2
      with:
        version: 1.0.0b3
    - name: Install Dependencies
      run: poetry install
    - name: Build and publish
      run: |
        poetry build
        poetry publish -u ${{ secrets.PYPI_USERNAME }} -p ${{ secrets.PYPI_PASSWORD }}

The final step in the workflow uses the poetry publish command to upload the Wheel and sdist to PyPI. I defined the secrets.PYPI_USERNAME and secrets.PYPI_PASSWORD context expressions by going to the repository settings, selecting Secrets, and defining two new encrypted environment variables that are only exposed to this workflow. If a contributor created a Pull Request from a fork of this repo, the secrets would not be passed to any workflows started from the Pull Request. These secrets, passed via the -u and -p options of the publish command, are used to authenticate with the PyPI servers.

At this point, we are done with our configuration to test and release a library. Commit and push your changes to your branch, and ensure all the steps pass successfully. This is what the output will look like on the Actions tab in GitHub:

GitHub Actions Output

I have posted the final version of my complete GitHub Actions workflows for a Python library on the Gaphas repo.

How to Test and Deploy a Python Application using GitHub Actions

My use case for testing a cross-platform Python Application is slightly different from the previous one we looked at for a library. For the library, it was really important we tested on all the supported versions of Python. For an application, I package the application for the platform it is running on with the version of Python that I want the app to use, normally the latest stable release of Python. So instead of testing with multiple versions of Python, it becomes much more important to ensure that the tests pass on all the platforms that the application will run on, and then package and deploy the app for each platform.

Below are the two pipelines I would like to create, one for CI and one for CD. Although you could combine these into a single pipeline, I like that GitHub Actions allows so much flexibility in being able to use any GitHub event to start a workflow. This tight integration is definitely a huge bonus here, and it allows you to make each workflow a little more atomic and understandable. I named my two workflows build.yml for the CI portion and release.yml for the CD portion.

App Workflow

Caching Python Dependencies

Although the lint phase is the same between a library and an application, I am going to add in one more optional cache step that I didn't include earlier for simplification:

      - name: Use Python Dependency Cache
        uses: actions/cache@v1.0.3
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/poetry.lock') }}
          restore-keys: ${{ runner.os }}-pip-

It is a good practice to use a cache to store information that doesn't change often between builds, like Python dependencies. It can help speed up the build process and lessen the load on the PyPI servers. While setting this up, I also learned from the Travis CI documentation that you should not cache large files that are quick to install but slow to download, like Ubuntu packages and Docker images. These files take as long to download from the cache as they do from the original source, which explains why the cache action doesn't have any examples of caching these types of files.

The cache works by checking whether a cached archive exists at the beginning of the workflow. If it exists, the action downloads it and unpacks it to the path location. At the end of the workflow, the action checks whether the cache previously existed; if not (a cache miss), it creates a new archive and uploads it to remote storage.

A few configurations to notice: the path is operating system dependent, because pip stores its cache in different locations. My configuration above is for Ubuntu, but you would need to use ~\AppData\Local\pip\Cache for Windows and ~/Library/Caches/pip for macOS. The key is used to determine whether the correct cache exists for restoring and saving to. Since I am using Poetry for dependency management, I take the hash of the poetry.lock file and add it to the end of a key that contains the context expression for the operating system the job is running on, runner.os, and pip. This will look like Windows-pip-45f8427e5cd3738684a3ca8d009c0ef6de81aa1226afbe5be9216ba645c66e8a, where the end is a long hash. This way, if my project dependencies change, my poetry.lock will be updated, and a new cache will be created instead of restoring from the old cache. If you aren't using Poetry, you could also use your requirements.txt or Pipfile.lock for the same purpose.
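As a rough sketch of how the per-platform paths could be handled in a single workflow (the runner.os checks are the part to note; adjust the step names and versions to your own jobs):

      - name: Use Python Dependency Cache (Windows)
        if: runner.os == 'Windows'
        uses: actions/cache@v1.0.3
        with:
          path: ~\AppData\Local\pip\Cache
          key: ${{ runner.os }}-pip-${{ hashFiles('**/poetry.lock') }}
          restore-keys: ${{ runner.os }}-pip-
      - name: Use Python Dependency Cache (macOS)
        if: runner.os == 'macOS'
        uses: actions/cache@v1.0.3
        with:
          path: ~/Library/Caches/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/poetry.lock') }}
          restore-keys: ${{ runner.os }}-pip-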

As mentioned earlier, if the key doesn't match an existing cache, it's called a cache miss. The final configuration option, restore-keys, is optional, and it provides an ordered list of keys to use for restoring the cache. The action does this by sequentially searching for any caches that partially match a key in the restore-keys list. If a key partially matches, the action downloads and unpacks that archive for use, until the new cache is uploaded at the end of the workflow.

Test Job

Ideally, it would be great to use a build matrix to test across platforms. This way you could have similar build steps for each platform without repeating yourself. This would look something like this:

runs-on: ${{ matrix.os }}
strategy:
    matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
steps:
    - name: Install Ubuntu Dependencies
      if: matrix.os == 'ubuntu-latest'
      run: >
        sudo apt-get update -q && sudo apt-get install
        --no-install-recommends -y xvfb python3-dev python3-gi
        python3-gi-cairo gir1.2-gtk-3.0 libgirepository1.0-dev libcairo2-dev
    - name: Install Brew Dependencies
      if: matrix.os == 'macOS-latest'
      run: brew install gobject-introspection gtk+3 adwaita-icon-theme

Notice how the if keyword tests which operating system is currently being used in order to modify the commands for each platform. As I mentioned earlier, the GTK app I am working on requires MSYS2 in order to test and package it for Windows. Since MSYS2 is a niche platform, most of the steps are unique and require manually setting paths and executing shell scripts. At some point maybe we can get some of these unique parts better wrapped in an Action, so that when we abstract up to the steps, they can be more common across platforms. Right now, using a matrix for each operating system wasn't any easier in my case than just creating three separate jobs, one for each platform.

If you are interested in a more complex matrix setup, Jeff Triplett posted his configuration for running five different Django versions against five different Python versions.

The implementation of the three test jobs is similar to the library test job that we looked at earlier.

test-linux:
  needs: lint
  runs-on: ubuntu-latest
...
test-macos:
  needs: lint
  runs-on: macOS-latest
...
test-windows:
  needs: lint
  runs-on: windows-latest

The other steps to install the dependencies, setup caching, and test with Pytest were identical.

CD Workflow - Release the App Installers

Now that we have gone through the CI workflow for a Python application, on to the CD portion. This workflow is using different event triggers:

name: Release

on:
  release:
    types: [created, edited]

GitHub has a Releases tab that is built into each repo. The deployment workflow here is started when I create or modify a release. You can define multiple events that will start the workflow by adding them as a comma-separated list. When I want to release a new version of Gaphor:

  1. I update the version number in pyproject.toml, commit the change, add a version tag, and finally push the commit and the tag (see the sketch after this list).
  2. Once the tests pass, I edit a previously drafted release so that it points to the new tag.
  3. The release workflow automatically builds and uploads the Python Wheel and sdist, the macOS dmg, and the Windows installer.
  4. Once I am ready, I click on the GitHub option to Publish release.
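Step 1 boils down to a few commands; here is a sketch of what I mean, with a hypothetical 1.2.3 version number:

$ # edit the version number in pyproject.toml, then:
$ git commit -am "Bump version to 1.2.3"
$ git tag 1.2.3
$ git push && git push --tags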

In order to achieve this workflow, first we create a job for Windows and macOS:

upload-windows:
    runs-on: windows-latest
...
upload-macos:
    runs-on: macOS-latest
...

The next steps to checkout the source, setup Python, install dependencies, install poetry, turn off virtualenvs, use the cache, and have poetry install the Python dependencies are the exact same as the application Test Job above.

Next we build the wheel and sdist, which is a single command when using Poetry:

      - name: Build Wheel and sdist
        run: poetry build

Our packaging for Windows is using custom shell scripts that run PyInstaller to package up the app, libraries, and Python, and makensis to create a Windows installer. We are also using a custom shell script to package the app for macOS. Once I execute the scripts to package the app, I then upload the release assets to GitHub:

      - name: Upload Assets
        uses: AButler/upload-release-assets@v2.0
        with:
          files: 'macos-dmg/*dmg;dist/*;win-installer/*.exe'
          repo-token: ${{ secrets.GITHUB_TOKEN }}

Here I am using Andrew Butler's upload-release-assets action. GitHub also has an action to perform this called upload-release-asset, but at the time of writing it didn't support uploading multiple files using wildcard characters, called glob patterns. secrets.GITHUB_TOKEN is another context expression, this time to get the access token that gives the Action permission to access the project repository, in this case to upload the release assets to a drafted release.

The final version of my complete GitHub Actions workflows for the cross-platform app are posted on the Gaphor repo.

Future Improvements to My Workflow

I think there is still some opportunity to simplify the workflows that I have created, through updates to existing actions or the creation of new ones. As I mentioned earlier, it would be nice for things to reach a maturity level where no custom environment variables, paths, or shell scripts are needed. Instead, we would be building workflows with actions as building blocks. I wasn't expecting this before I started working with GitHub Actions, but I am sold that this would be immensely powerful.

Since GitHub only recently released CI/CD for Actions, many of the GitHub-provided actions could still use a little polish. Most of the improvements I thought of had already been recognized by others, with issues opened as feature requests. If we give it a little time, I am sure these will be improved soon.

I also said that one of my goals was to release to the three major platforms, but if you were paying attention in the last section, I only mentioned Windows and macOS. We are currently packaging our app using Flatpak for Linux, and it is distributed through FlatHub. FlatHub does have an automatic build system, but it requires manifest files stored in a special separate FlatHub repo for the app. I also contributed to the Flatpak Builder Tools in order to automatically generate the needed manifest from the poetry.lock file. This works well, but it would be nice in the future to have the CD workflow for my app kick off updates to the FlatHub repo.

Bonus - Other Great Actions

Debugging with tmate - tmate is a terminal sharing app built on top of tmux. This great action allows you to pause a workflow in the middle of executing the steps, and then ssh in to the host runner and debug your configuration. I was getting a Python segmentation fault while running my tests, and this action proved to be extremely useful.

Release Drafter - In my app CD workflow, I showed that I am executing it when I create or edit a release. The release drafter action drafts my next release with release notes based on the Pull Requests that are merged. I then only have to edit the release to add the tag I want to release with, and all of my release assets automatically get uploaded. The PR Labeler action goes along with this well to label your Pull Requests based on branch name patterns like feature/*.

How to Rock Python Packaging with Poetry and Briefcase

NOTE: Briefcase now automatically creates a new project with a pyproject.toml file by running briefcase new. I would recommend following the BeeWare Tutorial to setup a new project if you want to use Briefcase for packaging it.

As part of modernizing Gaphas, the diagramming widget for Python, I took another look at what the best practices are for packaging and releasing a new version of a Python library or application. There are new configuration formats and tools to make packaging and distributing your Python code much easier.

A Short Background on Packaging

There are two main use cases for packaging:

  1. Packaging a Library - software that other programs will make use of.
  2. Packaging an Application - software that a user will make use of.

This may not be a completely accurate distinction, because software does not always fit cleanly into one of these bins, but these use cases will help to keep the focus on what exactly we are trying to achieve with the packaging.

The Library

The goal for packaging a library is to place it on the Python Packaging Index (PyPI), so other projects can pip install it. In order to distribute a library, the standard format is the Wheel. It allows for providing a built distribution of files and metadata so that pip only needs to extract files out of the distribution and move them to the correct location on the target system for the package to be installed. In other words, nothing needs to be built and re-compiled.

Previously if you wanted to achieve this, it was common to have four configuration files:

  1. setup.py - The setup script for building, distributing and installing modules using the Distutils.
  2. requirements.txt - Allow easy install of requirements using pip install -r
  3. setup.cfg - The setup configuration file
  4. MANIFEST.in - The manifest template, directs sdist how to generate a manifest

The Application

The goal for packaging an application is to get it into formats you can distribute on the different platforms for easy installation by your users. For Windows this is often an exe or msi. For macOS this is an app. For Linux this is a deb, flatpak, appimage, or snap. There is a whole host of tools to do this, like py2exe, py2app, cx_Freeze, PyInstaller, and rumps.

pyproject.toml

On the packaging front, in May of 2016, PEP 518 was created. The PEP does a good job of describing all of the shortcomings of the setup script method of specifying build requirements. The PEP also specified a new configuration format called pyproject.toml. If you aren't familiar with TOML, it is human-readable and simpler than YAML.

The pyproject.toml file replaces those four configuration files using two main sections (a minimal example follows this list):

  1. [build-system] - The build-system table contains the minimum requirements for the build system to execute.
  2. [tool] - The tool table is where different tools can have users specify configuration data.
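A minimal sketch of what that looks like: the build-system values are the ones Poetry generates (they also appear in the full example later in this post), and the [tool.black] table is just an illustration of a tool reading its settings from pyproject.toml:

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

[tool.black]
line-length = 88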

The Tools

Making use of this new configuration format, a tool called flit has been around since 2015 as a simple way to put Python Libraries on PyPI.

In 2017, Pipenv was created to solve pain points about managing virtualenvs and dependencies for Python Applications by using a new Pipfile to manage dependencies. The other major enhancement was the use of a lock file. While a Wheel is the important output for a Library, for an Application, the lock file becomes the important thing created for the project. The lock file contains the exact version of every dependency so that it can be repeatably rebuilt.

In 2018, a new project called Poetry combined some of the ideas from flit and Pipenv to create a new tool that aims to further simplify and improve packaging. Like flit, Poetry makes use of the pyproject.toml to manage configuration all in one place. Like Pipenv, Poetry uses a lock file (poetry.lock) and will automatically create a virtualenv if one does not already exist. It also has other advantages like exhaustive dependency resolution that we will explore more thoroughly below.

For Application distribution, I am going to focus on a single tool called Briefcase, which along with the other BeeWare tools and libraries allows you to distribute your program as a native application to Windows, Linux, macOS, iOS, Android, and the web.

Tutorial

With the background information out of the way, let's work through how you can create a new Python project from scratch, and then package and distribute it.

Initial Tool Installation

To do that, I am going to introduce one more tool (the last one I promise!) called cookiecutter. Cookiecutter provides Python project templates, so that you can quickly get up to speed creating a project that can be packaged and distributed without creating a bunch of files and boilerplate manually.

To install cookiecutter, depending on your setup and operating system, from a virtualenv you can run:

$ pip install cookiecutter

Next we are going to install Poetry. The recommended way is to run:

$ curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python

TestPyPI Account Sign-Up

As part of this tutorial we will be publishing packages. If you don't already have an account, please register for an account on TestPyPI. TestPyPI allows you to try distribution tools and processes without affecting the real PyPI.

Create Your Project

To create the Python project, we are going to use the Briefcase template, so run cookiecutter on this template:

$ cookiecutter https://github.com/pybee/briefcase-template

Cookiecutter will ask you for information about the project, like the name, description, and software license. Once this is finished, add any additional code to your project, or just keep it as is for this demo.

Change your directory to the app name you gave (I called mine dantestapp), and initialize git:

$ cd dantestapp
$ git init
$ git add .

Create a pyproject.toml Configuration

Poetry comes equipped to create a pyproject.toml file for your project, which makes it easy to add it to an existing or new project. To initialize the configuration, run:

$ poetry init

The command guides you through creating your pyproject.toml config. It automatically pulls in the configuration values from the briefcase-template that we used earlier, so accepting the default values by hitting enter for the first six questions is fine. This is the output it provided:

Package name [dantestapp]: 
Version [0.1.0]: 
Description []: 
Author [Dan Yeaw <dan@yeaw.me>, n to skip]: 
License []: MIT
Compatible Python versions [^3.7]: 

Define Dependencies

The configuration generator then asks for you to define your dependencies:

Would you like to define your dependencies (require) interactively? (yes/no) [yes]

Hit enter for yes.

At the next prompt, Search for package:, enter briefcase. We are setting briefcase as a dependency that our project needs to run.

Enter package # to add, or the complete package name if it is not listed: 
 [0] briefcase
 [1] django-briefcase

Type 0 to select the first option, and hit enter to select the latest version. You now need to repeat this process to also add Toga as a dependency. Toga is the native cross-platform GUI toolkit. Once you are done, hit enter again to finish searching for dependencies.

Define Development Dependencies

At the next prompt the config generator is now asking us to define our development dependencies:

Would you like to define your dev dependencies (require-dev) interactively (yes/no) [yes]

Hit enter to select the default value which is yes.

We are going to make pytest a development dependency for the project.

At the prompt Search for package:, enter pytest.

Found 100 packages matching pytest

Enter package # to add, or the complete package name if it is not listed: 
 [ 0] pytest

You will get a long list of pytest packages. Type 0 to select the first option, and hit enter to select the latest version. Then hit enter again to complete the search for development dependencies.

Complete the Configuration

The final step of the configuration generator summarizes the configuration that it created. Notice that the first three sections are tool tables for Poetry, and the final one is the build-system table.

[tool.poetry]
name = "dantestapp"
version = "0.1.0"
description = ""
authors = ["Dan Yeaw <dan@yeaw.me>"]
license = "MIT"

[tool.poetry.dependencies]
python = "^3.7"
briefcase = "^0.2.8"
toga = "^0.2.15"

[tool.poetry.dev-dependencies]
pytest = "^4.0"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

The dependencies use a "caret requirement", like python = "^3.7". This makes use of semantic versioning: in this example, if Python 3.8 is released, the constraint will automatically allow updating to it, but it won't allow 4.0, since that is a major version change. A caret requirement locks only the leftmost non-zero version number, so "^3.7.2" would still allow updates to 3.7.3 and 3.8, but never to 4.0.

There are also "tilde requirements" that are more restrictive. If you enter python = "~3.7", only updates within the 3.7 series are allowed, like from 3.7.2 to 3.7.3, but not 3.8. The combination of caret and tilde requirements lets you receive updates to your dependencies when they are released, but keeps you in control to ensure that incompatible changes won't break your app. Nice!
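To make the ranges concrete, here is a small sketch of what those two constraint styles resolve to (the toga version is just an example):

[tool.poetry.dependencies]
python = "^3.7"     # means >=3.7.0, <4.0.0
toga = "~0.2.15"    # means >=0.2.15, <0.3.0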

The final prompt asks: Do you confirm generation? (yes/no) [yes]. Go ahead and hit enter to confirm. Congrats, you have generated a pyproject.toml configuration!

Install Dependencies

OK, the hard work is over: we have created our project and finished the configuration. Now it is time to see how Poetry and Briefcase really shine.

To install the dependencies that you defined in the pyproject.toml, just run:

$ poetry install

Poetry includes an exhaustive dependency resolver, so it will now resolve all of the dependencies it needs to install Briefcase, Toga, and pytest. It will also create a poetry.lock file which ensures that anyone using your program would get the exact same set of dependencies that you used and tested with.

Notice that we also did not create or specify a virtual environment. Poetry automatically creates one prior to installing packages, if one isn't already activated. If you would like to see which packages are installed and which virtual environment Poetry is using you can run:

$ poetry show -v
or
$ poetry config --list

Bundle and Run your Application for Platform Distribution

For a Python application, you want to bundle the application and all of its dependencies into a single package so that it can easily be installed on a user's platform without the user having to manually install Python and other modules.

Briefcase allows you to package and run your app for your platform:

(Windows) $ poetry run python setup.py windows -s
(macOS)   $ poetry run python setup.py macos -s
(Linux)   $ poetry run python setup.py linux -s

Your app will launch, although it will just be a blank window at this point.

Also notice that it creates a folder with the platform name that you used above. Inside this folder, Briefcase has packaged your app for distribution on your platform. Briefcase also has distribution options for android, ios, and django.

Build your Library for Distribution on PyPI

Building both distribution formats is a single Poetry command:

$ poetry build

Building dantestapp (0.1.0)
 - Building sdist
 - Built dantestapp-0.1.0.tar.gz

 - Building wheel
 - Built dantestapp-0.1.0-py3-none-any.whl

The source distribution (sdist) and wheel are now in a new dist folder.

Publish your Library to PyPI

First we are going to add the TestPyPI repository to Poetry so that it knows where to publish to; by default, Poetry publishes to the real PyPI.

$ poetry config repositories.test-pypi https://test.pypi.org/legacy/

Now simply run:

$ poetry publish -r test-pypi

The -r argument tells Poetry to use the repository that we configured. Poetry then will ask for your username and password. Congrats! Your package is now available to be viewed at https://test.pypi.org/project/your-project-name/ and can be pip installed with pip install -i https://test.pypi.org/simple/ your-project-name.

5 Steps to Build Python Native GUI Widgets for BeeWare

Part of my work at Ford Motor Company is to use Model-Based Systems Engineering, through languages like SysML, to help design safety into complex automated and electrified technologies. In my free time I took over maintaining a UML tool called Gaphor, with the aim of eventually turning it into a simple SysML tool for beginners. I'm sure I'll be writing about this much more in the future.

Eventually I got really frustrated with the current set of GUI toolkits that are available for Python. I want the ability to write an app once and have it look and feel great on all of my devices, but instead I was dealing with toolkits that are wrapped or introspected around C or C++ libraries, or that visually look like a blast from the past. They made me feel like I was going against the grain of Python instead of writing great Pythonic code.

If you haven't heard of BeeWare yet, it is a set of software libraries for cross-platform native app development from a single Python codebase and tools to simplify app deployment. When I say cross-platform and native, I mean truly that. The project aims to build, deploy, and run apps for Windows, Linux, macOS, Android, iPhone, and the web. It is native because it is actually that platform's native GUI widgets, not a theme, icon pack, or webpage wrapper.

A little over a year ago, I started to contribute to the BeeWare project. I needed a canvas drawing widget for the app I am working on, and I saw that this was not supported by BeeWare, so I went ahead and created it. Based on that experience, this blog post details how I would create a new widget from scratch now that I have done it before, with the hope that it helps you implement your own widget as well.

If you are new to BeeWare, I recommend starting out with the Briefcase and Toga Tutorials, and then the First-time Contributor's Guide.

BeeWare Logo with Brutus the Bee and text

The current status of the BeeWare project, at the time of writing, is that it is a solid proof of concept. Creating a simple app on macOS, Linux, or iOS is definitely possible. In fact, there is an app called Travel Tips on Apple's App Store that was created by Russell Keith-Magee as a demonstration. Support for some of the other platforms, like Windows and Android, is lagging behind somewhat, so expect some very rough edges.

This alpha status may not be so exciting for you if you are just trying to build an app, but I think it is very exciting for those that want to contribute to an open source project. Although there are many ways to get involved, users keep asking how they can build a GUI widget that isn't yet supported. I think this is a great way to make a significant contribution.

A GUI widget forms the controls and logic that a user interacts with when using a GUI. The BeeWare project uses a GUI widget toolkit called Toga, and below is a view of what some of the widgets look like in Linux.

Example of Toga Widgets in a demo app

There are button, table, tree, and icon widgets in the example. Since I contributed a canvas drawing widget, I will be using that for the example of how you could contribute your own widget to the project.

There are three internal layers that make up every widget:

  1. The Interface layer
  2. The Implementation layer
  3. The Native layer

Toga Blackbox

As the input to Toga, the Interface layer provides the public API for the GUI application that you are building. This is the code you will type to build your app using Toga.

As the output of Toga, the Native layer connects the Toga_impl backends to the native platform. For C-language based platforms, Toga directly calls the native widgets. For example, with Gtk+ on Linux, Toga_gtk directly calls the Gtk+ widgets through PyGObject. For other platforms, more intermediate work may be required through a bridge or transpiler:

  • macOS and iOS: the Rubicon-ObjC project provides a bridge between Objective-C and Python.
  • Web: Batavia provides a JavaScript implementation of the Python virtual machine.
  • Android: VOC is a transpiler that converts Python into Java bytecode.

Toga Whitebox

The Interface layer calls public methods that live in the Toga_core portion of the project, and this is where the Interface layer API is defined. Toga_core also provides any abstract functionality that is independent of the platform that Toga is running on, like setting up and running the app itself.

The Implementation layer connects Toga_core to the Toga_impl component.

A couple of other terms you should know about are impl and interface:

  1. From Toga_core, self._impl is used to go across the Implementation layer to Toga_impl.
  2. From Toga_impl, self.interface is used to go across the Implementation layer back to Toga_core.
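
As a quick illustration of those two references (canvas stands in for any Toga_core widget object, mirroring the test code later in this post):

# Illustrative only: the two references that cross the Implementation layer
canvas._impl              # from Toga_core, reach the platform backend (Toga_impl)
canvas._impl.interface    # from Toga_impl, reach back to Toga_core
# The round trip ends at the original widget object
assert canvas._impl.interface is canvas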

More Terms

Toga uses the Factory Method design pattern in order to improve testability. This pattern creates objects using a factory method instead of directly calling a constructor. In Toga, this factory method lives in Toga_core and is used to instantiate a platform backend as the Toga_impl, such as Toga_ios, Toga_cocoa, or Toga_gtk. The factory method automatically selects the correct backend based on sys.platform.

Factory Method

Toga_dummy is also a type of Toga_impl backend; it is used for smoke testing without a specific platform in order to find simple failures. When tests are initialized, Toga_dummy is passed in as the factory. This allows the tests and the creation of objects to be separated, which improves maintainability and makes the test code easier to read.
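
A minimal sketch of how that backend selection could work; the function name and module mapping here are illustrative assumptions, not Toga's actual internals:

import importlib
import sys

def select_backend(factory=None):
    """Return the Toga_impl backend module to use as the factory."""
    if factory is not None:
        # Tests pass toga_dummy in explicitly, bypassing platform detection
        return factory
    backends = {
        "darwin": "toga_cocoa",
        "linux": "toga_gtk",
        "win32": "toga_winforms",
    }
    # Fall back to the dummy backend when the platform isn't recognized
    module_name = backends.get(sys.platform, "toga_dummy")
    return importlib.import_module(module_name)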

I know that is a lot to take in, but understanding the software architecture of Toga, together with the surrounding projects and interfaces, will be key to implementing your own widget. With that background information out of the way, let's not delay any further and jump into building a widget.

Step 0

Pick your development platform

  • Normally pick the platform that you are most familiar with
  • macOS and Gtk+ are the most developed :thumbsup:
  • Is this a mobile-only widget (camera, GPS, etc.)?

This seems somewhat obvious, since the platform you select will most likely be based on the laptop or other device you are using right now, but it is worth considering. Most of my experience developing widgets is on Gtk+ and Cocoa, so that is where I am coming from. Implementing widgets on the other platforms is definitely needed as well, but it may be an additional challenge because those platforms are not as well developed in Toga yet. These other platforms may be more challenging, but they are also the areas where the BeeWare project needs the most help, so if you have some experience with them or feel especially brave, definitely go for it.

Step 1

Research your widget

  • Abstraction requires knowledge of specific examples
  • Create use cases or user stories
  • Get feedback

Since Toga is an abstraction of native GUI toolkits, understanding the APIs for these platforms is extremely important in order to develop a well-abstracted API for Toga. In other words, these native platforms provide the inspiration for, and the constraints on, implementing your own widget.

As an example of how you would conduct this research, here is how you would draw a rectangle on a canvas on different platforms:

  • Tkinter
canvas = tk.Canvas()
canvas.create_rectangle(10, 10, 100, 100, fill="red")
canvas.pack()
  • wxPython
# inside a wx.Panel subclass
self.Bind(wx.EVT_PAINT, self.OnPaint)

def OnPaint(self, evt):
    dc = wx.PaintDC(self)
    dc.SetBrush(wx.Brush(wx.Colour(200, 0, 0)))
    dc.DrawRectangle(10, 10, 100, 100)
  • HTML canvas
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
ctx.fillStyle = "rgb(200, 0, 0)";
ctx.fillRect(10, 10, 100, 100);
  • Gtk+
drawingarea = Gtk.DrawingArea()

def draw(da, ctx):
    ctx.set_source_rgb(0.8, 0, 0)  # Cairo colors are floats from 0.0 to 1.0
    ctx.rectangle(10, 10, 100, 100)
    ctx.fill()

drawingarea.connect("draw", draw)

The other thing to understand is how a user will use this widget to build their own app. I like to create a quick Use Case diagram to flesh this out, but you could also use User Stories or similar methods.

For the case of the Canvas widget, I came up with three main use cases:

  1. A simple drawing app, where a user adds shapes, colors, and text to the screen.
  2. A vector drawing app, where a user draws lines and shapes, and then needs the ability to edit the nodes of the lines.
  3. A platformer game, where there are a lot of objects drawn on the screen, including the hero. The hero needs its own drawing context so that it can run, jump, and move around without unintentionally modifying the rest of the objects.

Use Cases

The last part of Step 1 is to get feedback. I recommend creating a GitHub Issue or Draft Pull Request at this point, starting to discuss the design of your widget abstraction with others, and continuing that discussion as you design your Python API in Step 2.

Step 2

Write Docs

  • Write your API documentation first
  • The API provides the set of clearly defined methods of communication (layers) between the software components
  • Documentation Driven Development
  • This is iterative with Step 1

With your use cases from Step 1, start your docs by explaining what your widget is and what it is used for. When looking at the canvas widgets from my research, I noticed that the existing drawing widgets were very procedural: you have to create your canvas drawing in many separate steps. For example, you first set the color to draw with, then draw an object, and then fill in that object.

Python has context managers and the "with" statement, and making use of them for a canvas allows the user to break the draw operations up with some nesting. It also allows the drawing of a closed path to be started or closed automatically for the user. This is an example of the kind of thing you can take advantage of in an API designed for Python. It is easy to simply copy the API you are familiar with, but I think you can really make your widget stand out by taking a step back and looking at how you can make an API that users will really enjoy using.

Here is an example of writing the initial explanation and widget API for the canvas widget:

The canvas is used for creating a blank widget that you can
draw on.

Usage
-----

Simple usage to draw a colored rectangle on the screen using
the fill drawing object:

import toga
from toga.colors import rgb
from toga.style import Pack

canvas = toga.Canvas(style=Pack(flex=1))
with canvas.fill(color=rgb(200, 0, 0)) as fill:
    fill.rect(10, 10, 100, 100)

Once that is complete, now might be a good time to ask for feedback, to see if you have missed any use cases or if others have ideas for improving the public API of the widget. One way to collect it is to submit an issue or a "work in progress" pull request to the Toga project, or to ask on the Gitter channel.

Next, start to work out the structure of your Toga_core code based on your API. I recommend creating the class and method definitions and adding the docstrings to outline what each portion of the software does and what arguments and return values it provides. This is part of the overall documentation that Sphinx will generate for your widget, and creating it before writing your code provides the next level of API documentation.

Here is an example of how that structure and docstrings would look for a canvas widget:

class Canvas(Context, Widget):
    """Create new canvas.

    Args:
        id (str): An identifier for this widget.
        style (:obj:`Style`): An optional style object.
        factory (:obj:`module`): A python module that is
            capable of returning an implementation of this class.
    """

    def rect(self, x, y, width, height):
        """Constructs and returns a :class:`Rect <Rect>`.

        Args:
            x (float): x coordinate for the rectangle.
            ...
        """

Step 3

Implement your Toga_core widget using TDD

  • Write a test for each function of the widget outlined in the API from Step 2
  • Check that the tests fail
  • Specify the implementation layer API
  • Write the core code for the widget to call the implementation layer

Test Driven Development (TDD) is a test-first technique in which you write your tests before, or in parallel with, the software itself. I am being opinionated here, because you don't have to write your code using this process, but I think it will really help you think about what you want from the code as you implement these different API layers.

Toga_core has a "tests" folder, and this is where you need to create your tests for the widget. Sometimes it can be challenging to know what tests to write, but in the previous step you already outlined the use cases and scenarios for using your widget, and the API to make use of it. Break this up into atomic tests that check that you can successfully create the widget, and then make use of and modify the widget in all of the outlined scenarios.

Here is a test to check that the widget is created. The canvas._impl.interface expression tests the call to the Toga_impl component ("_impl") and then back to the Toga_core component ("interface"). In other words, we are testing that the canvas object is the same object as we go across the Implementation layer to the Toga_impl and then back across the Implementation layer to the Toga_core. The objects should be equal as long as the widget was created successfully. The second line of the test, assertActionPerformed, uses the dummy backend to test that the canvas was created; I'll discuss that more in Step 4 below.

def test_widget_created(self):
    self.assertEqual(self.canvas._impl.interface, self.canvas)
    self.assertActionPerformed(self.canvas, "create Canvas")

Further along in my test creation I also wanted to check that the user could modify a widget that was already created. So I created a test that modifies the coordinates and size of a rectangle.

def test_rect_modify(self):
    rect = self.canvas.rect(-5, 5, 10, 15)
    rect.x = 5
    rect.y = -5
    rect.width = 0.5
    rect.height = -0.5
    self.canvas.redraw()
    self.assertActionPerformedWith(
        self.canvas, "rect", x=5, y=-5, width=0.5, height=-0.5
    )
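
Both tests assume a test class that wires the dummy backend in as the factory. Here is a minimal sketch of that setup, using the TestCase helper shipped with toga_dummy (treat the exact names and import paths as my assumptions):

import toga
import toga_dummy
from toga_dummy.utils import TestCase  # provides the assertActionPerformed helpers

class CanvasTests(TestCase):
    def setUp(self):
        super().setUp()
        # Pass the dummy backend in as the factory so no real GUI is needed
        self.canvas = toga.Canvas(factory=toga_dummy.factory)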

Once you are done creating your tests and have made sure that they fail as expected, it is time to move on to filling in all of those Toga_core classes and methods that you left blank in the previous step.

Toga provides a base Widget class that all widgets derive from. It defines the interface for core functionality such as children, styling, layout, and ownership by a specific App and Window. Below, our Canvas class is derived from Widget and initialized:

class Canvas(Widget):
    def __init__(self, id=None, style=None, factory=None):
        super().__init__(id=id, style=style, factory=factory)

As part of the class initialization, Toga also uses the factory method to determine the correct Toga_impl platform backend, connecting Toga_core to the Toga_impl through self._impl, and back the other way using interface=self:

        # Create a platform specific implementation of Canvas
        self._impl = self.factory.Canvas(interface=self)

Finally, we fill in our method so that it calls the rectangle creation on the Toga_impl component through the Implementation layer:

    def rect(self, x, y, width, height):
        self._impl.rect(
            x, y, width, height
        )

Step 4

Implement the Toga_impl widget on the dummy backend

  • Dummy is for automatic testing without a native platform
  • Code the implementation layer API endpoint, create a method for each call of the API
  • Check that all tests now pass

When your widget is integrated with Toga, we want its unit tests to run automatically with the test suite during continuous integration. It would be difficult during these tests to start up every platform and check that your widget is working correctly, so there is a Toga_impl called dummy that doesn't require a platform at all. This allows for smoke testing to make sure that the widget is correctly calling the Implementation layer API.

Now go ahead and implement the Toga_impl widget on the dummy backend. There needs to be a method for each call from Toga_core to Toga_impl. Below, the Canvas create and rect methods record their actions so that the tests can check they were invoked through Implementation layer API calls.

class Canvas(Widget):
    def create(self):
        self._action("create Canvas")

    def rect(self, x, y, width, height):
        self._action(
            "rect", x=x, y=y, width=width, height=height
        )

You should now be able to run and pass all the tests that you created in Step 3.

Step 5

Implement the Toga_impl widget on your platform backend

  • Copy toga_dummy and create a new endpoint for the platform you chose in Step 1
  • Make use of the native interface API for this widget on your platform

If, after your research in Step 1, you aren't feeling confident about how the widget should work on your platform, now would be a good time to take a break and practice. Build a simple canvas drawing app for your platform using the native widgets (a small Gtk+ sketch follows below). Once you have done that, it is time to create the Toga_impl for your platform that calls those native widgets.
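
For the Gtk+ case, that practice app can be very small. Here is a minimal sketch, assuming PyGObject and Gtk+ 3, of a window with a DrawingArea that paints a red rectangle, mirroring the research snippet from Step 1:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def draw(widget, ctx):
    # Cairo colors are floats in the 0.0-1.0 range
    ctx.set_source_rgb(0.8, 0.0, 0.0)
    ctx.rectangle(10, 10, 100, 100)
    ctx.fill()

win = Gtk.Window(title="Canvas practice")
area = Gtk.DrawingArea()
area.connect("draw", draw)
win.add(area)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()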

In my example, Gtk+ uses an event callback to do native drawing. So I create a Gtk.DrawingArea to draw on when my Canvas widget is created, and then I connect its draw signal to the gtk_draw_callback method, which in turn calls a method in Toga_core through the Implementation layer:

class Canvas(Widget):
    def create(self):
        self.native = Gtk.DrawingArea()
        self.native.interface = self.interface
        self.native.connect("draw", self.gtk_draw_callback)

    def gtk_draw_callback(self, canvas, gtk_context):
        self.interface._draw(self, draw_context=gtk_context)

Some platforms, like Android or Cocoa, will require transpiling or bridging to make the native platform calls, since those platforms use a different programming language. This may require the creation of extra native objects, for example to reserve memory on those platforms. Here is an example of what this extra TogaCanvas class would look like on the Cocoa platform:

class TogaCanvas(NSView):
    @objc_method
    def drawRect_(self, rect: NSRect) -> None:
        context = NSGraphicsContext.currentContext.graphicsPort()

Finally, create each method for your native implementation. Below we create an implementation of rectangle creation that calls Gtk+'s Cairo drawing API:

    def rect(self, x, y, width, height, draw_context):
        draw_context.rectangle(x, y, width, height)

Iterate

Iterate through steps 1-5 to complete your widget implementation

In the examples, we created a Canvas and a rectangle drawing operation on that canvas. Now it is time to iterate back through all the steps and implement the other drawing operations that a Canvas needs, like the other shapes, lines, and text. Once you finish this, you should have a complete widget!
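
To show how small each additional operation is, here is a hedged sketch (the line_to name is just an example) of one more drawing operation following the same pattern across the three code bases:

# Toga_core: public API method that crosses the Implementation layer
def line_to(self, x, y):
    self._impl.line_to(x, y)

# Toga_dummy: record the action so the unit tests can assert on it
def line_to(self, x, y):
    self._action("line to", x=x, y=y)

# Toga_gtk: call the native Cairo drawing API
def line_to(self, x, y, draw_context):
    draw_context.line_to(x, y)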

Toga Tutorial 4 for a Canvas Widget

Tada! You did it. Submit a PR!

I would be interested in hearing how it goes for you, so drop me a line with your experience creating a widget.

2018-11-10: Minor editorial updates.

2019-04-27: Split Toga Architecture diagram up to make it more clear.

2019-05-02: Improve description about research in Step 1. Add description of impl and interface in architecture section.