olasd's corner of the 'tubes

  • DebConf Video team sprint (and stuff) @ MiniDebCamp FOSDEM 2020

    Over the three days of the MiniDebCamp at FOSDEM, members of the DebConf Video team, and their friends, have assembled to participate in a sprint.

    MiniDebCamp @ FOSDEM 2020

    I’ve been (very pleasantly!) surprised by the number of people present at the MiniDebCamp, as well as the variety of topics they were working on. A great atmosphere, the welcoming environment provided by the HSBXL, and the low-key organization were something that I think other event organizers can get inspiration from: just get a room, and basic amenities (power, tables, seats, heating), and this will turn into a successful event!

    Busy Debian hackers at the Brussels Hackerspace

    On to what the Video Team members (and their friends!) did:

    highvoltage

    Jonathan spent time setting up the Debian PeerTube instance to accept videos, as well as adapting our video upload scripts, which feed from our video metadata repository as well as our main video archive, to upload our videos to PeerTube.

    olasd

    My primary contribution to the Video Team sprint was bringing our team hardware from Paris for others to play with. I had a conversation with tzafrir to review the video team list of requirements for DebConf 2020, and I helped phls to get up to speed on our ansible setup.

    Most of my time at the MiniDebCamp was actually spent getting a local test setup for piuparts.d.o running, so I can be more confident making changes to Debian’s piuparts infrastructure, which I adopted a few weeks ago, without breaking it too much (as piuparts is in the critical path for testing migration, bugs can become very noisy very quickly).

    phls

    Paulo spent time getting up to speed on our ansible setup; his goal was to become more confident setting up video recording and streaming for the upcoming MiniDebConf in Maceió at the end of March. He used this opportunity to re-master our MiniDebConf video mixing/recording PC under Buster, using our current (post-DebConf19) ansible playbooks.

    tumbleweed

    Stefano worked on our NeTV2, a recent open-hardware, FPGA-based HDMI device by bunnie that the Video Team acquired a few months ago. With the help of Tim from TimVideos and others, he got up to speed on the FPGA toolchain the board uses, and tried to get the recently developed PCI-e capture feature to work.

    …and friends of the Video Team!

    Doing a sprint in a “shared” context is an opportunity to get more collaboration going. I’ve been quite happy to see that Tim Ansell and folks from antmicro decided to join the MiniDebCamp to work on improving the cool open hardware video projects that we’re either currently using or looking forward to using!

    Team-wide conversations

    Other members of the team (h01ger, ivodd, nattie, tzafrir, urbec, …) were present during the event. While most of their time was spent working on their other respective areas of interest (in no particular order: the organization of DebConf 2020, Release Team work, Reproducible Builds, …), they contributed to various video team-related conversations during the sprint.

    For instance, we’ve discussed the setup for DebConf 2020, and we brainstormed a list of prospective hardware purchases, to try to make shipping our hardware to upcoming events in Europe and elsewhere easier.

    Now that we’re all at FOSDEM, we’ll surely come across some interesting topics for discussion!

    Acknowledgements

    Many thanks to the folks at HSBXL for hosting us, to Holger Levsen for following through with organizing the MiniDebCamp, to Kyle Robbertze for setting up the Video Sprint in the first place, even if he could not join us (we miss you paddatrapper!), and to the many donors and sponsors of the Debian project! Holding these sprints would be impossible without your support.

    February 1, 2020
  • It is complete!

    It is complete!

    (well, at least the front of it is)

    After last week’s DebConf Video Team sprint (thanks again to Jasper @ Linux Belgium for hosting us), the missing components for the stage box turned up right as I was driving back to Paris, and I could take the time to assemble them tonight.

    The final inventory of this 6U rack is, from top to bottom:

    • U1: 1 Shure UA844+SWB antenna and power distribution system
    • U2 and U3: 4 Shure BLX4R/S8 receivers
    • U4: 1 Numato Opsis, 1 Minnowboard Turbot, 1 Netgear Gigabit Ethernet Switch to do presenter laptop (HDMI) capture, all neatly packed in a rackmount case.
    • U5 and U6: 1 tray with 2 BLX1/S8 belt packs with MX153T earset microphones and 2 BLX2/SM58 handheld microphone assemblies

    This combination of audio and video equipment is all the things that the DebConf Video Team needs to have near the stage to record presentations in one room.

    The next step will be to get the back panel milled to properly mount connectors, so that we don’t have to take the back of the box apart to wire things up 🙂

    You may see this equipment in action at one of Debian’s upcoming events.

    February 5, 2019
  • Record number of uploads of a Debian package in an arbitrary 24-hour window

    Since Dimitri has given me the SQL virus, I have a hard time avoiding opportunities for twisting my brain.

    Seeing the latest post from Chris Lamb made me wonder: how hard would it be to do better? Splitting by date is rather arbitrary (the split may even depend on the timezone you’re using when you’re doing the query), so let’s try to find out the maximum number of uploads that happened for each package in any 24-hour window.

    First, for each upload, we get how many uploads of the same package happened in the subsequent 24 hours.

    SELECT
      source,
      date,
      (
        SELECT
          count(*)
        FROM
          upload_history AS other_upload
        WHERE
          other_upload.source = first_upload.source
          AND other_upload.date >= first_upload.date
          AND other_upload.date < first_upload.date + '24 hours'
      ) AS count
    FROM
      upload_history AS first_upload

    For each source package, we want the maximum count of uploads in any 24-hour window.

    SELECT
      source,
      max(count)
    FROM
      upload_counts
    GROUP BY
      source

    We can then join both queries together, to get the 24-hour window in which the most uploads of a given source package have happened.

    WITH upload_counts AS (
      SELECT
        source,
        date,
        (
          SELECT
            count(*)
          FROM
            upload_history AS other_upload
          WHERE
            other_upload.source = first_upload.source
            AND other_upload.date >= first_upload.date
            AND other_upload.date < first_upload.date + '24 hours'
        ) AS count
      FROM
        upload_history AS first_upload
    )
    SELECT
      source,
      date,
      count
    FROM
      upload_counts
    INNER JOIN (
      SELECT
        source,
        max(count) AS max_uploads
      FROM
        upload_counts
      GROUP BY
        source
    ) AS m
      USING (source)
    WHERE
      count = max_uploads
      AND max_uploads >= 9
    ORDER BY
      max_uploads DESC,
      date ASC;
    

    The results are almost the ones Chris has found, but cl-sql and live-config now have one more upload than live-boot.

           source       |          date          | count 
    --------------------+------------------------+-------
     cl-sql             | 2004-04-17 03:34:52+00 |    14
     live-config        | 2010-07-15 17:19:11+00 |    14
     live-boot          | 2010-07-15 17:17:07+00 |    13
     zutils             | 2010-12-30 17:33:45+00 |    11
     belocs-locales-bin | 2005-03-20 21:05:44+00 |    10
     openerp-web        | 2010-12-30 17:32:07+00 |    10
     debconf            | 1999-09-25 18:52:37+00 |     9
     gretl              | 2000-06-16 18:53:11+00 |     9
     posh               | 2002-07-24 17:04:46+00 |     9
     module-assistant   | 2003-09-11 05:53:18+00 |     9
     live-helper        | 2007-04-20 18:16:38+00 |     9
     dxvk               | 2018-11-06 00:04:02+00 |     9
    (12 lines)

    Thanks to Adrian and Chris for the involuntary challenge!

    November 8, 2018
  • docker and 127.0.0.1 in /etc/resolv.conf

    I’ve been doing increasingly more stuff with docker these days, and I’ve bumped into an issue: all my systems use a local DNS resolver, one way or another: as I’m roaming and have some VPNs with split-horizon DNS, my laptop uses dnsmasq as configured by NetworkManager; as I want DNSSEC validation, my servers use a local unbound instance. (I’d prefer to use unbound everywhere, but I roam on some networks where the only DNS recursors that aren’t firewalled don’t support DNSSEC, and unbound is pretty bad at handling that unfortunately)

    Docker handles 127.0.0.1 in /etc/resolv.conf the following way: it ignores the entry (upstream discussion). When there are no DNS servers left, it falls back to using 8.8.8.8 and 8.8.4.4.

    This is all fine and dandy, if that’s the sort of thing you like (I don’t kinkshame), but if you don’t trust the owners of those DNS recursors, or if your network helpfully firewalls outbound DNS requests, you’ll likely want to use your own resolver instead.

    The upstream docker FAQ just tells people to disable the local resolver and/or hardcode some DNS entries in the docker config. That’s not very helpful when you roam and you really want to use your local resolver… To do things properly, you’ll have to set two bits of configuration: tell docker to use the local resolver, and tell your local resolver to listen on the docker interface.

    Docker DNS configuration

    First, you need to configure the docker daemon to use the host as its DNS server: in the /etc/docker/daemon.json file, set the dns key to ["172.17.0.1"] (or whatever IP address your docker host is set to use).
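    For reference, a minimal /etc/docker/daemon.json carrying just that setting would look like this (a sketch; adjust the address to your own docker bridge IP):

```json
{
    "dns": ["172.17.0.1"]
}
```

    Restart the docker daemon after editing the file so that new containers pick up the setting.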

    Local resolver: dnsmasq in NetworkManager

    When NetworkManager is set up to use dnsmasq, it runs with a configuration that’s built dynamically, and updated when the network settings change (e.g. to switch upstream resolvers according to DHCP or VPN settings).

    You can add drop-in configurations in the /etc/NetworkManager/dnsmasq.d directory. I have set the following configuration variables in a /etc/NetworkManager/dnsmasq.d/docker.conf file:

    # Makes dnsmasq bind to all interfaces (but still only accept queries on localhost as per NetworkManager configuration)
    bind-interfaces
    
    # Makes dnsmasq accept queries on 172.17.0.0/16
    listen-address=172.17.0.1

    Restarting NetworkManager brings up a new instance of dnsmasq, which will let Docker do its thing.

    Local resolver: unbound

    The same sort of configuration is done with unbound. We tell unbound to listen to all interfaces and to only accept recursion from localhost and the docker subnet. In /etc/unbound/unbound.conf.d/docker.conf:

    interface: 0.0.0.0
    interface: ::
    access-control: 127.0.0.0/8 allow
    access-control: ::1/128 allow
    access-control: 172.17.0.0/16 allow

    Restart unbound (reload doesn’t reopen the sockets) and your docker container will have access to your local resolver.

    Both these local resolver configurations should work even when the docker interface comes up after the resolver: we tell the resolver to listen to all interfaces, then only let it answer to clients on the relevant networks.

    Result

    Before:

    $ docker run busybox nslookup pypi.python.org
    Server: 8.8.8.8
    Address 1: 8.8.8.8 google-public-dns-a.google.com
    
    Name: pypi.python.org
    Address 1: 2a04:4e42::223
    Address 2: 2a04:4e42:200::223
    Address 3: 2a04:4e42:400::223
    Address 4: 2a04:4e42:600::223
    Address 5: 151.101.0.223
    Address 6: 151.101.64.223
    Address 7: 151.101.128.223
    Address 8: 151.101.192.223

    After:

    $ docker run busybox nslookup pypi.python.org
    Server: 172.17.0.1
    Address 1: 172.17.0.1
    
    Name: pypi.python.org
    Address 1: 2a04:4e42:1d::223
    Address 2: 151.101.16.223
    April 14, 2018
  • Report from Debian SnowCamp: day 3

    [Previously: day 1, day 2]

    Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

    As a starter, and on request from Valhalla, please enjoy an attempt at a group picture (unfortunately, missing a few people). Yes, the sun even showed itself for a few moments today!

    One of the numerous SnowCamp group pictures

    As for today’s activities… I’ve cheated a bit by doing stuff after sending yesterday’s report and before sleep: I reviewed some of Stefano’s dc18 pull requests; I also papered over (rather than fixed) the debexpo uscan bug.

    After keeping eyes closed for a few hours, the day was then spent tickling the python-gitlab module, packaged by Federico, in an attempt to resolve https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=890594 in a generic way.

    The features I intend to implement are mostly inspired from jcowgill’s multimedia-cli:

    • per-team yaml configuration of “expected project state” (access level, hooks and other integrations, enablement of issues, merge requests, CI, …)
    • new repository creation (according to a team config or a personal config, e.g. for collab-maint, the Debian group)
    • audit of project configurations
    • mass-configuration changes for projects

    There could also be some use for bits of group management, e.g. to handle the access control of the DebConf group and its subgroups, although I hear Ganneff prefers shell scripts.

    My personal end goal is to (finally) do the 3D printer team repository migration, but e.g. the Python team would like to update configuration of all repos to use the new KGB hook instead of irker, so some generic interest in the tool exists.

    As the tool has a few dependencies (because I really have better things to do than reimplement another wrapper over the GitLab API) I’m not convinced devscripts is the right place for it to live… We’ll see when I have something that does more than print a list of projects to show!

    In the meantime, I have the feeling Stefano has lined up a new batch of DebConf website pull requests for me, so I guess that’s what I’m eating for breakfast “tomorrow”… Stay tuned!

    My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

    February 25, 2018
  • Report from Debian SnowCamp: day 2

    [Previously: day 1]

    Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

    Today’s pièce de résistance was the long overdue upgrade of the machine hosting mentors.debian.net to (jessie then) stretch. We’ve spent most of the afternoon doing the upgrades with Mattia.

    The first upgrade to jessie was a bit tricky because we had to clean up a lot of cruft that accumulated over the years. I even managed to force an unexpected database restore test. After a few code fixes, and getting annoyed at apache2.4 for ignoring VirtualHost configs that don’t end with .conf (and losing an hour of debugging time in the process…), we managed to restore the functionality of the website.

    We then did the stretch upgrade, which was somewhat smooth sailing in comparison… We had to remove some functionality which depended on packages that didn’t make it to stretch: fedmsg, and the SOAP interface. We also noticed that the gpg2 transition completely broke the… “interesting” GPG handling of mentors… An install of gnupg1 later everything should be working as it was before.

    We’ve also tried to tackle our current need for a patched FTP daemon. To do so, we’re switching the default upload queue directory from / to /pub/UploadQueue/. Mattia has submitted bugs for dput and dupload, and will upload an updated dput-ng to switch the default. Hopefully we can do the full transition by the next time we need to upgrade the machine.

    Known bugs: the uscan plugin now fails to parse the uscan output… But at least it “supports” version=4 now.

    Of course, we’re still sorely lacking volunteers who would really care about mentors.debian.net; the codebase is a pile of hacks upon hacks upon hacks, all relying on an old version of a deprecated Python web framework. A few attempts have been made at a smooth transition to a more recent framework, without really panning out, mostly for lack of time on the part of the people running the service. I’m still convinced things should restart from scratch, but I don’t currently have the energy or time to drive it… Ugh.

    More stuff will happen tomorrow, but probably not on mentors.debian.net. See you then!

    My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

    February 24, 2018
  • Report from Debian SnowCamp: day 1

    Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

    This morning, I arrived in Milan at “omfg way too early” (5:30AM, thanks to a night train arriving 30 minutes early (!)), and used the opportunity to walk the empty streets around the Duomo while the Milanese .oO(mapreri) were waking up. This let me take very nice pictures of monuments without people, which is always appreciated!


    After a short train ride to Laveno, we arrived at the Hostel at around 10:30. Some people had already arrived the day before, so there was already a hacking kind of mood in the air. I’d post a panorama but apparently my phone generated a corrupt JPEG…

    After rearranging the tables in the common spaces to handle power distribution correctly (♥ Gaffer Tape), we could start hacking!

    Today’s efforts were focused on the DebConf website: there were a bunch of pull requests made by Stefano that I reviewed and merged:

    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/23
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/22
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/19
    • [DebConf team meeting: no merges while the bot is logging]
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/24
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/25
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/27
    • https://salsa.debian.org/debconf-team/public/websites/dc18/merge_requests/28

    I’ve also written a modicum of code.

    Finally, I have created the Debian 3D printing team on salsa in preparation for migrating our packages to git. But now is time to do the sleep thing. See you tomorrow?

    My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

    February 22, 2018
  • Listing and loading of Debian repositories: now live on Software Heritage

    Listing and loading of Debian repositories: now live on Software Heritage

    Software Heritage is the project I’ve been working on for the past two and a half years now. The grand vision of the project is to build the universal software archive, which will collect, preserve and share the Software Commons.

    Today, we’ve announced that Software Heritage is archiving the contents of Debian daily. I’m reposting this article on my blog as it will probably be of interest to readers of Planet Debian.

    TL;DR: Software Heritage now archives all source packages of Debian as well as its security archive daily. Everything is ready for archival of other Debian derivatives as well. Keep on reading to get details of the work that made this possible.

    History

    When we first announced Software Heritage, back in 2016, we had archived the historical contents of Debian as present on the snapshot.debian.org service, as a one-shot proof of concept import.

    This code was then left in a drawer and never touched again, until last summer when Sushant came to do an internship with us. We took the opportunity to rework the code that was originally written, and to make it more generic: instead of being tied to the specifics of snapshot.debian.org, the code can now work with any Debian repository. This means that we can now archive any of the numerous Debian derivatives that are available out there.

    This has been live for a few months, and you can find Debian package origins in the Software Heritage archive now.

    Mapping a Debian repository to Software Heritage

    The main challenge in listing and saving Debian source packages in Software Heritage is mapping the content of the repository to the generic source history data model we use for our archive.

    Organization of a Debian repository

    Before we start looking at a bunch of unpacked Debian source packages, we need to know how a Debian repository is actually organized.

    At the top level of a Debian repository lies a set of suites, representing versions of the distribution, that is to say a set of packages that have been tested and are known to work together. For instance, Debian currently has 6 active suites, from wheezy (“old old stable” version), all the way up to experimental; Ubuntu has 8, from precise (12.04 LTS), up to bionic (the future 18.04 release), as well as a devel suite. Each of those suites also has a bunch of “overlay” suites, such as backports, which are made available in the archive alongside full suites.

    Under the suites, there’s another level of subdivision, which Debian calls components, and Ubuntu calls areas. Debian uses its components to segregate packages along licensing terms (main, contrib and non-free), while Ubuntu uses its areas to denote the level of support of the packages (main, universe, multiverse, …).

    Finally, components contain source packages, which merge upstream sources with distribution-specific patches, as well as machine-readable instructions on how to build the package.

    Organization of the Software Heritage archive

    The Software Heritage archive is project-centric rather than version-centric. What this means is that we are interested in keeping the history of what was available in software origins, which can be thought of as a URL of a repository containing software artifacts, tagged with a type representing the means of access to the repository.

    For instance, the origin for the GitHub mirror of the Linux kernel repository has the following data:

    • type: git
    • url: https://github.com/torvalds/linux

    For each visit of an origin, we take a snapshot of all the branches (and tagged versions) of the project that were visible during that visit, complete with their full history. See for instance one of the latest visits of the Linux kernel. For the specific case of GitHub, pull requests are also visible as virtual branches, so we fetch those as well (as branches named refs/pull/<pull request number>/head).

    Bringing them together

    As we’ve seen, Debian archives (like those of other “traditional” Linux distributions) are release-centric rather than package-centric. Mapping distributions to the Software Heritage archive therefore takes a little bit of gymnastics, to transpose the list of source packages available in each suite to a list of available versions per source package. We do this step by step:

    1. Download the Sources indices for all the suites and components known in the Debian repository
    2. Parse the Sources indices, listing all source packages inside
    3. For each source package, tell the Debian loader to load all the available versions (grouped by name), generating a complete snapshot of the state of the source package across the Debian repository
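    The parsing and grouping steps can be sketched in Python. This is a rough illustration, not the actual Software Heritage lister code; `parse_sources_index` is a hypothetical helper operating on an already-downloaded, uncompressed Sources index:

```python
from collections import defaultdict

def parse_sources_index(text):
    """Parse a Debian Sources index (RFC822-style stanzas separated by
    blank lines) into a mapping {source package name: [versions]}."""
    packages = defaultdict(list)
    for stanza in text.split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            # Keep only "Key: value" field lines; skip continuation lines
            if line and not line[0].isspace() and ":" in line:
                key, _, value = line.partition(":")
                fields[key] = value.strip()
        if "Package" in fields and "Version" in fields:
            packages[fields["Package"]].append(fields["Version"])
    return dict(packages)
```

    Running this over the indices of every (suite, component) pair and merging the results per package name yields the per-package version lists that get handed to the loader.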

    The source packages are mapped to origins using the following format:

    • type: deb
    • url: deb://<repository name>/packages/<source package name> (e.g. deb://Debian/packages/linux)

    We use a repository name rather than the actual URL to a repository so that links can persist even if a given mirror disappears.
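    In code, the origin mapping is straightforward (a hypothetical helper, shown only to make the format concrete):

```python
def debian_origin(repository_name, source_package):
    """Build a Software Heritage origin for a Debian source package,
    keyed on a stable repository name rather than a mirror URL."""
    return {
        "type": "deb",
        "url": "deb://%s/packages/%s" % (repository_name, source_package),
    }
```

    The indirection through the repository name is what keeps the origin URL stable when mirrors come and go.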

    Loading Debian source packages

    To load Debian source packages into the Software Heritage archive, we have to convert them: Debian-based distributions distribute source packages as a set of files: a .dsc (Debian Source Control) file and a set of tarballs (usually an upstream tarball and a Debian-specific overlay). On the other hand, Software Heritage only stores version-control information: revisions, directories, and files.

    Unpacking the source packages

    Our philosophy at Software Heritage is to store the source code of software in the precise form that allows a developer to start working on it. For Debian source packages, this is the unpacked source code tree, with all patches applied. After checking that the files we have downloaded match the checksums published in the index files, we simply use dpkg-source -x to extract the source package, with patches applied, ready to build. This also means that we currently fail to import packages that don’t extract with the version of dpkg-source available in Debian Stretch.

    Generating a synthetic revision

    After walking the extracted source package tree and computing identifiers for all its contents, we get the identifier of the top-level tree, which we reference in the synthetic revision.

    The synthetic revision contains the “reproducible” metadata that is completely intrinsic to the Debian source package. With the current implementation, this means:

    • the author of the package, and the date of modification, as referenced in the last entry of the source package changelog (referenced as author and committer)
    • the original artifact (i.e. the information about the original source package)
    • basic information about the history of the package (using the parsed changelog)

    However, we never set parent revisions in the synthetic commits, for two reasons:

    • there is no guarantee that packages referenced in the changelog have been uploaded to the distribution, or imported by Software Heritage (our update frequency is lower than that of the Debian archive)
    • even if this guarantee existed, and all versions of all packages were available in Software Heritage, there would be no guarantee that the version referenced in the changelog is indeed the version we imported in the first place

    This makes the information stored in the synthetic revision fully intrinsic to the source package, and reproducible. In turn, this allows us to keep a cache mapping original artifacts to synthetic revision ids, so we never load the same package twice.

    Storing the snapshot

    Finally, we can generate the top-level object in the Software Heritage archive, the snapshot. For instance, you can see the snapshot for the latest visit of the glibc package.

    To do so, we generate a list of branches by concatenating the suite, the component, and the version number of each detected source package (e.g. stretch/main/2.24-10 for version 2.24-10 of the glibc package available in stretch/main). We then point each branch to the synthetic revision that was generated when loading the package version.

    In case a version of a package fails to load (for instance, if the package version disappeared from the mirror between the moment we listed the distribution, and the moment we could load the package), we still register the branch name, but we make it a “null” pointer.
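    A minimal sketch of that branch-building step (again hypothetical, with failed loads represented as None):

```python
def snapshot_branches(loaded_versions):
    """Map {(suite, component, version): synthetic revision id, or None
    when loading failed} to Software Heritage snapshot branch names."""
    return {
        "%s/%s/%s" % (suite, component, version): revision_id
        for (suite, component, version), revision_id in loaded_versions.items()
    }
```

    A branch pointing to None records that a version existed in the suite even though its contents could not be archived during that visit.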

    There are still some improvements to make to the Debian-specific lister: it currently hardcodes the list of components/areas in the distribution, as the repository format provides no programmatic way of eliciting them. Currently, only Debian and its security repository are listed.

    Looking forward

    We believe that the model we developed for the Debian use case is generic enough to capture not only Debian-based distributions, but also RPM-based ones such as Fedora, Mageia, etc. With some extra work, it should also be possible to adapt it for language-centric package repositories such as CPAN, PyPI or Crates.

    Software Heritage is now well on the way to providing the foundations for a generic and unified source browser for the history of traditional package-based distributions.

    We’ll be delighted to welcome contributors that want to lend a hand to get there.

    February 20, 2018
  • DebConf 17 bursaries: update your status now!

    TL;DR: if you applied for a DebConf 17 travel bursary, and you haven’t accepted it yet, login to the DebConf website and update your status before June 20th or your bursary grant will be gone.

    *blows dust off the blog*

    As you might be aware, DebConf 17 is coming soon and it’s gonna be the biggest DebConf in Montréal ever.

    Of course, what makes DebConf great is the people who come together to work on Debian, share their achievements, and help draft our cunning plans to take over the world. Also cheese. Lots and lots of cheese.

    To that end, the DebConf team had initially budgeted US$40,000 for travel grants ($30,000 for contributors, $10,000 for diversity and inclusion grants), allowing the bursaries team to bring people from all around the world who couldn’t have made it to the conference.

    Our team of volunteers rated the 188 applications, we made a ranking (technically, two rankings: one on contribution grounds and one on D&I grounds), and we finally sent out a first round of grants last week.

    After the first round, the team made a new budget assessment, and thanks to the support of our outstanding sponsors, an extra $15,000 has been allocated for travel stipends during this week’s team meeting, with the blessing of the DPL.

    We’ve therefore been able to send a second round of grants today.

    Now, if you got a grant, you have two things to do: you need to accept your grant, and you need to update your requested amount. Both of those steps allow us to use our budget more wisely: grants that expire free up money to get more people to the conference earlier, and updated amounts give us a better view of our overall budget. (You can only lower your requested amount, as we can’t inflate our budget.)

    Our system has sent mails to everyone, but it’s easy enough to let that email slip (or to not receive it for some reason). It takes 30 seconds to look at the status of your request on the DebConf 17 website, and even less to do the few clicks needed for you to accept the grant. Please do so now! OK, it might take a few minutes if your SSO certificate has expired and you have to look up the docs to renew it.

    The deadline for the first round of travel grants (which went out last week) is June 20th. The deadline for the second round (which went out today) is June 24th. If somehow you can’t login to the website before the deadline, the bursaries team has an email address you can use.

    We want to send out a third round of grants on June 25th, using the money people freed up: our current acceptance ratio is around 40%, and a lot of very strong applications have been deferred. We don’t want them to wait up until July to get a definitive answer, so thanks for helping us!

    À bientôt à Montréal !

    June 14, 2017
  • Debian is now welcoming applicants for Outreachy and GSoC Summer 2015

    Hi all,

    I am delighted to announce that Debian will be participating in the next round of Outreachy and GSoC, and that we are currently welcoming applications!

    Outreachy helps people from groups underrepresented in free and open source software get involved. The current round of internships is open to women (cis and trans), trans men, genderqueer people, and all participants of the Ascend Project regardless of gender.

    Google Summer of Code is a global program, sponsored by Google, that offers post-secondary student developers ages 18 and older stipends to write code for various open source software projects.

    Interns for both programs are granted a $5500 stipend (in three installments) allowing them to dedicate their summer to working full-time on Debian.

    Our amazing team of mentors has listed their project ideas on the Debian wiki, and we are now welcoming applicants for both programs.

    If you want to apply for an internship with Debian this summer, please fill out the template for either Outreachy or GSoC. If you’re eligible for both programs, we encourage you to apply to both (using the same application), as Debian only has funds for a single Outreachy intern this round.

    Don’t wait up! The application period for Outreachy ends March 24th, and the GSoC application period ends March 27th. We really want applicants to start contributing to their project before we make our selection, so that mentors can get a feel for what working with their intern will be like for three months. The small task is a requirement for Outreachy, and we strongly encourage GSoC applicants to abide by that rule too. To contribute in the best conditions, you shouldn’t wait until the last minute to apply 🙂

    I hope we’ll work with a lot of great interns this summer. If you think you’re up for the challenge, it’s time to apply! If you have any doubts, or any question, drop us a line on the soc-coordination mailing list or come by on our IRC channel (#debian-soc on irc.debian.org) and we’ll do our best to guide you.

    March 8, 2015
