    Instructions for installing the most popular webdrivers, and then the latest version of the standalone Selenium server (`selenium-instructions.md`)

    # Steps For Setting Up Selenium And The Webdrivers
    
    ### Install The Firefox Geckodriver
    
    * Download [the latest Geckodriver for Firefox](https://github.com/mozilla/geckodriver/releases)
    * then `mv` that file to `/usr/local/bin/geckodriver` and `sudo chmod +x /usr/local/bin/geckodriver`
    * make sure you have `"webdriver.firefox.profile" : "geckodriver",` in your `nightwatch.json` file if you are using it
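
    For reference, here is a sketch of those geckodriver steps as shell commands. The version and platform in the URL are assumptions; check the releases page for the current build:

    ```sh
    # download and unpack a release (version/platform assumed -- adjust as needed)
    wget https://github.com/mozilla/geckodriver/releases/download/v0.19.0/geckodriver-v0.19.0-linux64.tar.gz
    tar -xzf geckodriver-v0.19.0-linux64.tar.gz

    # put it on the PATH and make it executable
    sudo mv geckodriver /usr/local/bin/geckodriver
    sudo chmod +x /usr/local/bin/geckodriver
    ```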
    
    ### Install The Chromedriver
    
    * Download the latest version [from the Chrome site](https://sites.google.com/a/chromium.org/chromedriver/downloads)
    * unzip it if it is a zip file
    * then `mv` that file to `/usr/local/bin/chromedriver` and `sudo chmod +x /usr/local/bin/chromedriver`
    
    ### Install the Safari Driver
    
    * Download the `SafariDriver.safariextz` [from the release site](http://selenium-release.storage.googleapis.com/index.html?path=2.45/)
    * Double click on the file and it will open in Safari
    * Accept the file as trusted
    * It will now show in your extensions
    
    ### Build the latest Selenium binary
    
    * `git clone git@github.com:SeleniumHQ/selenium.git`
    * `cd selenium`
    * `./go clean release`
    * `cd build/dist`
    * You can now run the server with the following: `java -jar selenium-server-standalone-3.0.0-beta1.jar`
    * _the jar file may have a different name depending on when you read this tutorial_
    
    ### Running the server
    
    * `cd` to the directory where you built the jar file
    * run: `java -jar selenium-server-standalone-3.0.0-beta1.jar`
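
    Once the server is running, you can optionally check that it is listening (the default port `4444` is assumed here):

    ```sh
    # should return a small JSON status payload if the server is up
    curl http://localhost:4444/wd/hub/status
    ```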
    
    You can also alias the command in your `~/.bashrc` or `~/.zshrc` with:
    
    ```sh
    alias selenium="java -jar /path/to/build/dist/folder/selenium-server-standalone-3.0.0-beta1.jar"
    ```
    
    Remember: _the jar file may have a different name depending on when you read this tutorial._
    
    

    How to install MySQL on Debian/derivatives and create databases and users for use with Django (`Mysql and Django.md`)

    Installing MySQL with Python and Django on Debian/Derivatives
    =====================

    To get started we need a few dependencies on the system; for now this guide covers **Debian and derivatives**. But first, let's install system updates and MySQL.

    ----------
    Updates and MySQL
    ---------

    **Update the system** with the following commands:
     
    ```
    $ sudo apt-get update
    $ sudo apt-get upgrade
    ```
     
    > **NOTE:** Every system has its own update commands; if your machine is not Debian-based, **look them up** :D
     
    #### Installing MySQL

    Install MySQL (5.5.*):
    ```
    $ sudo apt-get install mysql-server mysql-client
    Passwd for 'root' user: mypasswd
    ```
    Finally, run this command to add more security to our database:
     
    ```
    $ mysql_secure_installation
    ```
    Review the prompted changes carefully: the first question is about the root password
    and whether you want to keep or change it, followed by other security questions.
     
     
    #### Creating a database and a user for it

    Now we will create the database that Django will connect to, and a user with a password to access it.
    There are two ways to do this:
    ```
    echo "CREATE DATABASE DATABASENAME;" | mysql -u root -p
    echo "CREATE USER 'DATABASEUSER'@'localhost' IDENTIFIED BY 'PASSWORD';" | mysql -u root -p
    echo "GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'DATABASEUSER'@'localhost';" | mysql -u root -p
    echo "FLUSH PRIVILEGES;" | mysql -u root -p
    ```
    Done this way, you will be prompted for your MySQL password on each line. Alternatively, you can do it as follows:
     
    ```
    $ mysql -u root -p
    ```
    Enter your password and then run the following:
     
    ```
    CREATE DATABASE DATABASENAME;
    CREATE USER 'DATABASEUSER'@localhost IDENTIFIED BY 'PASSWORD';
    GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'DATABASEUSER'@localhost;
    FLUSH PRIVILEGES;
    exit
    ```
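
    To confirm that the new user can actually reach the database, a quick check like this should work (using the placeholder names from above):

    ```
    $ mysql -u DATABASEUSER -p DATABASENAME -e "SHOW TABLES;"
    ```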
     
    #### Check the dependencies

    There are only a few dependencies, but let's make sure they are installed:
     
    ```
    $ sudo apt-get install libmysqlclient-dev python-dev
    ```
     
    #### Installing the mysql-python driver with pip

    That's nearly everything; now just install the driver with pip, either in your virtualenv or globally:
     
    ```
    $ sudo pip install mysql-python
    ```
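
    A quick way to confirm the driver installed correctly is to import it from the same environment (mysql-python installs the module as `MySQLdb`):

    ```
    $ python -c "import MySQLdb; print(MySQLdb.__version__)"
    ```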
     
     
    ----------
    Summary
    ---------
    As you can see, you can now create a database and a user for each of your Django projects.

    Unattended Ubuntu Install: download a non-graphical Ubuntu installation ISO (`ubuntu_unattended_install.md`)

    Source: https://askubuntu.com/a/122506/209043

        wget http://www.instalinux.com/download/iso1132.iso -O /root/iso1132.iso
        wget http://www.instalinux.com/download/preseed1132.txt -O preseed.cfg
    
    
    ---
    
    
    The complete solution is:
    
    Remaster a CD, i.e., download a non-graphical Ubuntu installation ISO (server or alternate installation CD) and mount it:
    
        $ sudo su -
        # mkdir -p /mnt/iso
        # mount -o loop ubuntu.iso /mnt/iso
    
    Copy the relevant files to a different directory
    
        # mkdir -p /opt/ubuntuiso
        # cp -rT /mnt/iso /opt/ubuntuiso
    
    Prevent the language selection menu from appearing
    
        # cd /opt/ubuntuiso
        # echo en >isolinux/lang
    
    Use the `system-config-kickstart` GUI program to create a kickstart file named `ks.cfg`
    
        # apt-get install system-config-kickstart
        # system-config-kickstart # save file to ks.cfg
    
    To add packages to the installation, add a `%packages` section to the `ks.cfg` kickstart file; append something like this to the end of the file:
    
        %packages
        @ ubuntu-server
        openssh-server
        ftp
        build-essential
    
    This will install the ubuntu-server "bundle", and will add the `openssh-server`, `ftp` and `build-essential` packages.
    
    Add a preseed file to suppress other questions:
    
        # echo 'd-i partman/confirm_write_new_label boolean true
        d-i partman/choose_partition \
        select Finish partitioning and write changes to disk
        d-i partman/confirm boolean true' > ks.preseed
    
    Set the boot command line to use the kickstart and preseed files
    
        # vi isolinux/txt.cfg
    
    Search for
    
        label install
          menu label ^Install Ubuntu Server
          kernel /install/vmlinuz
          append  file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet --
    
    Add `ks=cdrom:/ks.cfg` and `preseed/file=/cdrom/ks.preseed` to the append line. You can remove the `quiet` and `vga=788` options. It should look like:
    
          append file=/cdrom/preseed/ubuntu-server.seed \
             initrd=/install/initrd.gz \
             ks=cdrom:/ks.cfg preseed/file=/cdrom/ks.preseed --
    
    Now create a new iso
    
        # mkisofs -D -r -V "ATTENDLESS_UBUNTU" \
             -cache-inodes -J -l -b isolinux/isolinux.bin \
             -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
             -boot-info-table -o /opt/autoinstall.iso /opt/ubuntuiso
    
    That's it. You'll have a CD that will install an Ubuntu system once you boot from it, without requiring a single keystroke.
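
    Before burning the image, you can optionally smoke-test it in a virtual machine. This assumes `qemu-system-x86_64` is installed; any virtualization tool that boots from an ISO works just as well:

        qemu-system-x86_64 -m 1024 -cdrom /opt/autoinstall.iso   # assumes qemu is installed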
    
    
    ---
    
    
    To bypass the need to press Enter on boot, change the timeout value from `0` to `10` in `isolinux/isolinux.cfg` (`timeout 10`). Note that a value of `10` represents `1` second.
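
    For example, a one-line sketch (assuming you are still in `/opt/ubuntuiso` and the line currently reads `timeout 0`):

        sed -i 's/^timeout 0/timeout 10/' isolinux/isolinux.cfg   # adjust the pattern if the spacing differs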
    
    
    
    ---
    
    # Last

        mkisofs -D -r -V "ATTENDLESS_UBUNTU" -cache-inodes -J -l \
             -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 \
             -boot-info-table -o /root/autoinstall.iso /opt/ubuntuiso
    
    
    
    

    Raspberry Pi VPN Router (`raspberry-pi-vpn-router.md`)

    # Raspberry Pi VPN Router
    
    This is a quick-and-dirty guide to setting up a Raspberry Pi as a "[router on a stick](https://en.wikipedia.org/wiki/One-armed_router)" to [PrivateInternetAccess](http://privateinternetaccess.com/) VPN.
    
    ## Requirements
    
    Install Raspbian Jessie (`2016-05-27-raspbian-jessie.img`) to your Pi's sdcard.
    
    Use the **Raspberry Pi Configuration** tool or `sudo raspi-config` to:
    
    * Expand the root filesystem and reboot
    * Boot to commandline, not to GUI
    * Configure the right keyboard map and timezone
    * Configure the Memory Split to give 16Mb (the minimum) to the GPU
    * Consider overclocking to the Medium (900MHz) setting on Pi 1, or High (1000MHz) setting on Pi 2
    
    ## IP Addressing
    
    My home network is setup as follows:
    
    * Internet Router: `192.168.1.1`
    * Subnet Mask: `255.255.255.0`
    * Router gives out DHCP range: `192.168.1.100-200`
    
    If your network range is different, that's fine, use your network range instead of mine.
    
    I'm going to give my Raspberry Pi a static IP address of `192.168.1.2` by configuring `/etc/network/interfaces` like so:
    
    ~~~
    auto lo
    iface lo inet loopback
    
    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4
    ~~~
    
    You can use WiFi if you like; there are plenty of tutorials around the internet for setting that up, but this should do:
    
    ~~~
    auto lo
    iface lo inet loopback
    
    auto eth0
    allow-hotplug eth0
    iface eth0 inet manual
    
    auto wlan0
    allow-hotplug wlan0
    iface wlan0 inet static
        wpa-ssid "Your SSID"
        wpa-psk  "Your Password"
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4
    ~~~
    
    You only need one connection into your local network; don't connect both Ethernet and WiFi. I recommend Ethernet if possible.
    
    ## NTP
    
    Accurate time is important for the VPN encryption to work. If the VPN client's clock is too far off, the VPN server will reject the client.
    
    You shouldn't have to do anything to set this up; the `ntp` service is installed and enabled by default.
    
    Double-check that your Pi is getting the correct time from internet time servers with `ntpq -p`; you should see at least one peer with a `+`, `*`, or `o`, for example:
    
    ~~~
    $ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    -0.time.xxxx.com 104.21.137.30    2 u   47   64    3  240.416    0.366   0.239
    +node01.jp.xxxxx 226.252.532.9    2 u   39   64    7  241.030   -3.071   0.852
    *t.time.xxxx.net 104.1.306.769    2 u   38   64    7  127.126   -2.728   0.514
    +node02.jp.xxxxx 250.9.592.830    2 u    8   64   17  241.212   -4.784   1.398
    ~~~
    
    ## Setup VPN Client
    
    Install the OpenVPN client:
    
    ~~~
    sudo apt-get install openvpn
    ~~~
    
    Download and uncompress the PIA OpenVPN profiles:
    
    ~~~
    wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
    sudo apt-get install unzip
    unzip openvpn.zip -d openvpn
    ~~~
    
    Copy the PIA OpenVPN certificates and profile to the OpenVPN client:
    
    ~~~
    sudo cp openvpn/ca.rsa.2048.crt openvpn/crl.rsa.2048.pem /etc/openvpn/
    sudo cp openvpn/Japan.ovpn /etc/openvpn/Japan.conf
    ~~~
    
    You can use a different VPN endpoint if you like. Note the extension change from **ovpn** to **conf**.
    
    Create `/etc/openvpn/login` containing only your username and password, one per line, for example:
    
    ~~~
    user12345678
    MyGreatPassword
    ~~~
    
    Change the permissions on this file so only the root user can read it:
    
    ~~~
    sudo chmod 600 /etc/openvpn/login
    ~~~
    
    Set up OpenVPN to use your stored username and password by editing the config file for the VPN endpoint:
    
    ~~~
    sudo nano /etc/openvpn/Japan.conf
    ~~~
    
    Change the following lines so they go from this:
    
    ~~~
    ca ca.rsa.2048.crt
    auth-user-pass
    crl-verify crl.rsa.2048.pem
    ~~~
    
    To this:
    
    ~~~
    ca /etc/openvpn/ca.rsa.2048.crt
    auth-user-pass /etc/openvpn/login
    crl-verify /etc/openvpn/crl.rsa.2048.pem
    ~~~
    
    ## Test VPN
    
    At this point you should be able to test the VPN actually works:
    
    ~~~
    sudo openvpn --config /etc/openvpn/Japan.conf
    ~~~
    
    If all is well, you'll see something like:
    
    ~~~
    $ sudo openvpn --config /etc/openvpn/Japan.conf 
    Sat Oct 24 12:10:54 2015 OpenVPN 2.3.4 arm-unknown-linux-gnueabihf [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Dec  5 2014
    Sat Oct 24 12:10:54 2015 library versions: OpenSSL 1.0.1k 8 Jan 2015, LZO 2.08
    Sat Oct 24 12:10:54 2015 UDPv4 link local: [undef]
    Sat Oct 24 12:10:54 2015 UDPv4 link remote: [AF_INET]123.123.123.123:1194
    Sat Oct 24 12:10:54 2015 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
    Sat Oct 24 12:10:56 2015 [Private Internet Access] Peer Connection Initiated with [AF_INET]123.123.123.123:1194
    Sat Oct 24 12:10:58 2015 TUN/TAP device tun0 opened
    Sat Oct 24 12:10:58 2015 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
    Sat Oct 24 12:10:58 2015 /sbin/ip link set dev tun0 up mtu 1500
    Sat Oct 24 12:10:58 2015 /sbin/ip addr add dev tun0 local 10.10.10.6 peer 10.10.10.5
    Sat Oct 24 12:10:59 2015 Initialization Sequence Completed
    ~~~
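
    While the tunnel is up, you can optionally confirm from a second terminal that traffic is leaving via the VPN, for example by checking the tunnel interface and your apparent public IP (the lookup service below is just an example):

    ~~~
    ip addr show tun0
    curl https://ifconfig.co   # any public-IP echo service works
    ~~~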
    
    Exit this with **Ctrl+c**
    
    ## Enable VPN at boot
    
    ~~~
    sudo systemctl enable openvpn@Japan
    ~~~
    
    ## Setup Routing and NAT
    
    Enable IP Forwarding:
    
    ~~~
    echo -e '\n#Enable IP Routing\nnet.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
    ~~~
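
    You can confirm the setting took effect; this should print `net.ipv4.ip_forward = 1`:

    ~~~
    sysctl net.ipv4.ip_forward
    ~~~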
    
    Set up NAT from the local LAN down the VPN tunnel:
    
    ~~~
    sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
    sudo iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
    ~~~
    
    Make the NAT rules persistent across reboot:
    
    ~~~
    sudo apt-get install iptables-persistent
    ~~~
    
    The installer will ask if you want to save the current rules; select **Yes**.
    
    If you don't select yes, that's fine, you can save the rules later with `sudo netfilter-persistent save`
    
    Make the rules apply at startup:
    
    ~~~
    sudo systemctl enable netfilter-persistent
    ~~~
    
    ## VPN Kill Switch
    
    This will block outbound traffic from the Pi so that only the VPN and related services are allowed.
    
    Once this is done, the only way the Pi can get to the internet is over the VPN.
    
    This means if the VPN goes down, your traffic will just stop working, rather than end up routing over your regular internet connection where it could become visible.
    
    ~~~
    sudo iptables -A OUTPUT -o tun0 -m comment --comment "vpn" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p icmp -m comment --comment "icmp" -j ACCEPT
    sudo iptables -A OUTPUT -d 192.168.1.0/24 -o eth0 -m comment --comment "lan" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 1198 -m comment --comment "openvpn" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m tcp --sport 22 -m comment --comment "ssh" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 123 -m comment --comment "ntp" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 53 -m comment --comment "dns" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 53 -m comment --comment "dns" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -j DROP
    ~~~
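
    You can review the resulting ruleset before saving it:

    ~~~
    sudo iptables -L OUTPUT -v --line-numbers
    ~~~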
    
    And save so they apply at reboot:
    
    ~~~
    sudo netfilter-persistent save
    ~~~
    
    If you find traffic on your other systems stops, then look on the Pi to see if the VPN is up or not.
    
    You can check the status and logs of the VPN client with:
    
    ~~~
    sudo systemctl status openvpn@Japan
    sudo journalctl -u openvpn@Japan
    ~~~
    
    ## Configure Other Systems on the LAN
    
    Now we're ready to tell other systems to send their traffic through the Raspberry Pi.
    
    Configure other systems' network so they are like:
    
    * Default Gateway: Pi's static IP address (eg: `192.168.1.2`)
    * DNS: Something public like Google DNS (`8.8.8.8` and `8.8.4.4`)
    
    Don't use your existing internet router (eg: `192.168.1.1`) as DNS, or your DNS queries will be visible to your ISP and hence may be visible to organizations who wish to see your internet traffic.
    
    ## Optional: DNS on the Pi
    
    To ensure all your DNS goes through the VPN, you could install `dnsmasq` on the Pi to accept DNS requests from the local LAN and forward requests to external DNS servers.
    
    ~~~
    sudo apt-get install dnsmasq
    ~~~
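
    Out of the box, dnsmasq forwards queries to the servers listed in `/etc/resolv.conf`. If you prefer an explicit configuration, a minimal sketch might look like this (the interface name and upstream servers are assumptions; adjust for your network):

    ~~~
    # interface and upstream servers are assumptions; adjust as needed
    echo 'interface=eth0
    listen-address=192.168.1.2
    server=8.8.8.8
    server=8.8.4.4' | sudo tee -a /etc/dnsmasq.conf
    sudo systemctl restart dnsmasq
    ~~~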
    
    You may now configure the other systems on the LAN to use the Pi (`192.168.1.2`) as their DNS server as well as their gateway.
    
    

    Task-based concurrency manifesto draft (`TaskConcurrencyManifesto.md`)

    # Concurrency in Swift: One possible approach
    
    * Author: [Chris Lattner](https://github.com/lattner)
    
    ## Introduction
    
    This document is published in the style of a "Swift evolution manifesto", outlining a long-term
    view of how to tackle a very large problem.  It explores *one possible* approach to adding
    a first-class concurrency model to Swift, in an effort to catalyze positive discussion that leads
    us to a best-possible design.  As such, it isn't an approved or finalized design
    prescriptive of what Swift will end up adopting.  It is the job of public debate on the open
    source [swift-evolution mailing list](https://github.com/apple/swift-evolution) to discuss and
    iterate towards that ultimate answer, and we may end up with a completely different approach.
    
    We focus on task-based concurrency abstractions commonly encountered in client
    and server applications, particularly those that are highly event driven (e.g. responding
    to UI events or requests from clients).  This does not attempt to be a comprehensive survey
    of all possible options, nor does it attempt to solve all possible problems in the space
    of concurrency.
    Instead, it outlines a single coherent design thread that can be built over the span of years to
    incrementally drive Swift to further greatness.
    
    ### Concurrency in Swift 1...4
    
    So far, Swift was carefully designed to avoid most concurrency topics, because we specifically did
    not want to cut off any future directions.  Instead, Swift programmers use OS abstractions (like
    GCD, pthreads, etc) to start and manage tasks.  The design of GCD and Swift's trailing
    closure syntax fit well together, particularly after the major update to the GCD APIs in Swift 3.
    
    While Swift has generally stayed away from concurrency topics, it has made some
    concessions to practicality.  For example, ARC reference count operations are atomic,
    allowing references to classes to be shared between threads.  Weak references are also
    guaranteed to be thread atomic, Copy-On-Write (🐮) types like Array and String are sharable,
    and the runtime provides some other basic guarantees.
    
    ### Goals and non-goals of this manifesto
    
    Concurrency is a broad and sweeping concept that can cover a wide range of topics.  To help
    scope this down a bit, here are some non-goals for this proposal:
    
     - We are focusing on task based concurrency, not data parallelism.  This is why we focus on
       GCD and threads as the baseline, while completely ignoring SIMD vectorization,
       data parallel for loops, etc.
     - In the systems programming context, it is important for Swift developers to have low-level
       opt-in access to something like the C or C++ memory consistency model.  This is definitely
       interesting to push forward, but is orthogonal to this work.
     - We are not discussing APIs to improve existing concurrency patterns (e.g. atomic integers,
       better GCD APIs, etc).
    
    So what are the actual goals?  Well, because it is already possible to express concurrent apps
    with GCD, our goal is to make the experience *far better than it is today* by appealing to the
    core values of Swift: we should aim to reduce the programmer time necessary to get from
    idea to a *working and efficient* implementation. In particular, we aim to improve the
    concurrency story in Swift along these lines:
    
     - Design: Swift should provide (just) enough language and library support for
       programmers to know what to reach for when concurrent abstractions are
       needed.  There should be a structured "right" way to achieve most tasks.
     - Maintenance: The use of those abstractions should make Swift code easier to
       reason about.  For example, it is often difficult to know what data is
       protected by which GCD queue and what the invariants are for a heap based
       data structure.
     - Safety: Swift's current model provides no help for race conditions, deadlock
       and other concurrency problems.  Completion handlers can get called on a
       surprising queue.  These issues should be improved, and we would like to get
       to a "safe by default" programming model.
     - Scalability: Particularly in server applications, it is desirable to have
       hundreds of thousands of tasks that are active at a time (e.g. one for every
       active client of the server).
     - Performance:  As a stretch goal, it would be great to improve performance,
       e.g. by reducing the number of synchronization operations performed, and
       perhaps even reducing the need for atomic accesses on many ARC operations.
       The compiler should be aided by knowing how and where data can cross task
       boundaries.
     - Excellence: More abstractly, we should look to the concurrency models
       provided by other languages and frameworks, and draw together the best ideas
       from wherever we can get them, aiming to be better overall than any
       competitor.
     
    That said, it is absolutely essential that any new model coexists with existing
    concurrency constructs and existing APIs.  We cannot build a conceptually
    beautiful new world without also building a pathway to get existing apps into
    it.
    
    
    ### Why a first class concurrency model?
    
    It is clear that the multicore world isn't the future: it is the present! As such, it is
    essential for Swift to make it straight-forward for programmers to take
    advantage of hardware that is already prevalent in the world.  At the same time,
    it is already possible to write concurrent programs: since adding a concurrency model
    will make Swift more complicated, we need a strong justification for that complexity.
    To show opportunity for improvement, let's explore some of the pain that Swift
    developers face with the current approaches.  Here we focus on GCD since almost
    all Swift programmers use it.
    
    #### Asynchronous APIs are difficult to work with
    
    Modern Cocoa development involves a lot of asynchronous programming using closures and completion handlers, but these APIs are hard to use.  This gets particularly problematic when many asynchronous operations are used, error handling is required, or control flow between asynchronous calls is non-trivial.
    
    There are many problems in this space, including the "pyramid of doom" that frequently occurs:
    
    ```swift
    func processImageData1(completionBlock: (result: Image) -> Void) {
        loadWebResource("dataprofile.txt") { dataResource in
            loadWebResource("imagedata.dat") { imageResource in
                decodeImage(dataResource, imageResource) { imageTmp in
                    dewarpAndCleanupImage(imageTmp) { imageResult in
                        completionBlock(imageResult)
                    }
                }
            }
        }
    }
    ```
    
    Error handling is particularly ugly, because Swift's natural error handling mechanism cannot be used.  You end up with code like this:
    
    ```swift
    func processImageData2(completionBlock: (result: Image?, error: Error?) -> Void) {
        loadWebResource("dataprofile.txt") { dataResource, error in
            guard let dataResource = dataResource else {
                completionBlock(nil, error)
                return
            }
            loadWebResource("imagedata.dat") { imageResource, error in
                guard let imageResource = imageResource else {
                    completionBlock(nil, error)
                    return
                }
                decodeImage(dataResource, imageResource) { imageTmp, error in
                    guard let imageTmp = imageTmp else {
                        completionBlock(nil, error)
                        return
                    }
                    dewarpAndCleanupImage(imageTmp) { imageResult in
                        guard let imageResult = imageResult else {
                            completionBlock(nil, error)
                            return
                        }
                        return imageResult
                    }
                }
            }
        }
    }
    ```
    
    Partially because asynchronous APIs are onerous to use, there are many APIs defined in a synchronous form that can block (e.g. `UIImage(named: ...)`), and many of these APIs have no asynchronous alternative.  Having a natural and canonical way to define and use these APIs will allow them to become pervasive.  This is particularly important for new initiatives like the Swift on Server group.
    
    #### What queue am I on?
    
    Beyond being syntactically inconvenient, completion handlers are problematic because their
    syntax suggests that they will be called on the current queue, but that is not always the case.
    For example, one of the top recommendations on Stack Overflow is to implement your own
    custom async operations with code like this (Objective-C syntax):
    
    ```objective-c
    - (void)asynchronousTaskWithCompletion:(void (^)(void))completion;
    {
      dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    
        // Some long running task you want on another thread
    
        dispatch_async(dispatch_get_main_queue(), ^{
          if (completion) {
            completion();
          }
        });
      });
    }
    ```
    
    Note how it is hard coded to call the completion handler on the main queue.  This is an
    insidious problem that can lead to surprising results and bugs like race conditions.  For
    example, since a lot of iOS code already runs on the main queue, you may have been using
    an API built like this with no problem.  However, a simple refactor to move that code to a
    background queue will introduce a really nasty problem where the code will queue hop
    implicitly - introducing subtle undefined behavior!
    
    There are several straight-forward ways to improve this situation like better documentation
    or better APIs in GCD.  However, the fundamental problem here is that there is no apparent
    linkage between queues and the code that runs on them.  This makes it difficult to design
    for, difficult to reason about and maintain existing code, and makes it more challenging to
    build tools to debug, profile, and reason about what is going wrong, etc.
    
    #### Shared mutable state is bad for software developers
    
    Let's define "Shared mutable state" first: "state" is simply data used by the program.  "Shared"
    means the data is shared across multiple tasks (threads, queues, or whatever other concurrency
    abstraction is used).  State shared by itself is not harmful: so long as no-one is modifying the
    data, it is no problem having multiple readers of that data.
    
    The concern is when the shared data is mutable, and therefore someone is changing it while
    other tasks are looking at it.  This opens an enormous can of worms that the software world has been
    grappling with for many decades now.  Given that there are multiple things looking at and
    changing the data, some sort of synchronization is required or else race conditions, semantic
    inconsistencies and other problems are raised.
    
    The natural first step is to reach for mutexes or locks.  Without attempting to survey the
    full body of work
    around this, I'll claim that locking and mutexes introduce a number of problems: you need to
    ensure that data is consistently protected by the right locks (or else bugs and memory safety
    issues result), determine the granularity of locking, avoid deadlocks, and deal with many other
    problems.  There have been a number of attempts to improve this situation, notably
    `synchronized` methods in Java (which were later imported into Objective-C).  This sort of
    thing improves the syntactic side of the equation but doesn't fix the underlying problem.
    
    Once an app is working, you then run into performance problems, because mutexes are
    generally very inefficient - particularly when there are many cores and threads.  Given decades
    of experience with this model, there are a number of attempts to solve certain corners of the
    problem, including
    [readers-writer locks](https://en.wikipedia.org/wiki/Readers–writer_lock),
    [double-checked locking](https://en.wikipedia.org/wiki/Double-checked_locking), low-level
    [atomic operations](https://en.wikipedia.org/wiki/Linearizability#Primitive_atomic_instructions)
    and advanced techniques like
    [read/copy/update](https://en.wikipedia.org/wiki/Read-copy-update).  Each of these improves
    on mutexes in some respect, but the incredible complexity, unsafety, and fragility of the
    resulting model is itself a sign of a problem.
    
    With all that said, shared mutable state is incredibly important when you're working at the
    level of systems programming: e.g. if you're *implementing* the GCD API or a kernel in Swift,
    you absolutely must have the full ability to do this.  This is why it is ultimately important
    for Swift to eventually define an opt-in memory consistency model for Swift code.  While it is
    important to one day do this, doing so would be an orthogonal effort and thus is not the
    focus of this proposal.
    
    I encourage anyone interested in this space to read [Is
    Parallel Programming Hard, And, If So, What Can You Do About
    It?](https://www.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html).  It is
    a great survey developed by Paul E. McKenney who has
    been driving forward efforts to get the Linux kernel to scale to massively multicore
    machines (hundreds of cores).  Besides being an impressive summary of hardware characteristics
    and software synchronization approaches, it also shows the massive complexity creep that
    happens when you start to care a lot about multicore scalability with pervasively shared
    mutable state.
    
    #### Shared mutable state is bad for hardware
    
    On the hardware side of things, shared mutable state is problematic for a number of reasons.
    In brief, the present is pervasively multicore - but despite offering the ability to view these
    machines as shared memory devices, they are actually incredibly
    [NUMA / non-uniform](https://en.wikipedia.org/wiki/Non-uniform_memory_access).
    
    To oversimplify a bit, consider what happens when two different cores are trying to read and
    write the same memory data: the cache lines that hold that data are arbitrated by (e.g.) the
    [MESI protocol](https://en.wikipedia.org/wiki/MESI_protocol), which only allows a cache
    line to be mutable in a single processor's L1 cache.  Because of this, performance quickly
    falls off of a cliff: the cache line starts ping-pong'ing between the cores, and
    mutations to the cache line have to be pushed out to other cores that are simply reading it.
    
    This has a number of other knock on effects: processors have quickly moved to having
    [relaxed consistency models](https://en.wikipedia.org/wiki/Consistency_model) which make
    shared memory programming even more complicated.  Atomic accesses (and other
    concurrency-related primitives like compare/exchange) are now 20-100x slower than non-atomic
    accesses.  These costs and problems continue to scale with core count, yet it
    isn't hard to find a large machine with dozens or hundreds of cores today.
    
    If you look at the recent breakthroughs in hardware performance, they have come from
    hardware that has dropped the goal of shared memory.  Notably,
    [GPUs](https://en.wikipedia.org/wiki/Graphics_processing_unit) have been extremely
    successful at scaling to extremely high core counts, notably because they expose a
    programming model that encourages the use of fast local memory instead of shared global
    memory.  Supercomputers frequently use [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface)
    for explicitly managed memory transfers, etc.  If you explore this from first principles, the
    speed of light and wire delay become an inherently limiting factor for very large shared
    memory systems.
    
    The point of all of this is that it is highly desirable for Swift to move in a direction where Swift
    programs run great on large-scale multi-core machines.  With any luck, this could unblock the
    next step in hardware evolution.
    
    #### Shared mutable state doesn't scale beyond a single process
    
    Ok, it is somewhat tautological, but any model built on shared mutable state doesn't work
    in the absence of shared memory.
    
    Because of this, the software industry has a complexity explosion of systems for [interprocess
    communication](https://en.wikipedia.org/wiki/Inter-process_communication): things like
    [sockets, signals, pipes, MIG,
    XPC](https://www.mikeash.com/pyblog/friday-qa-2009-01-16.html), and many others.
    Operating systems then invariably
    introduce variants of the same abstractions that exist in a single process, including locks (file
    locking), shared mutable state (memory mapped files), etc.  Beyond IPC, [distributed
    computation](https://en.wikipedia.org/wiki/Distributed_computing)
    and cloud APIs then reimplement the same abstractions in yet-another way, because
    shared memory is impractical in that setting.
    
    The key observation here is simply that this is a really unfortunate state of
    affairs.  A better world would be for app developers to have a way to
    build their data abstractions, concurrency abstractions, and reason about their
    application in the large, even if it is running across multiple machines in a cloud ecosystem.
    If you want your single process app to start running in an IPC or distributed setting, you
    should only have to teach your types how to serialize/🐟 themselves, deal with new
    errors that can arise, then configure where you want each bit of code to run.  You shouldn't
    have to rewrite large parts of the application - certainly not with an entirely new technology
    stack.
    
    After all, app developers don't design their API with JSON as the input and output format
    for each function, so why should cloud developers?
    
    ## Overall vision
    
    This manifesto outlines several major steps to address these problems, which can be added
    incrementally to Swift over the span of years.  The first step is quite concrete, but subsequent
    steps get increasingly vague: this is an early manifesto and there is more design work to
    be done.  Note that the goal here is not to come up with inherently novel ideas, it is to pull
    together the best ideas from wherever we can get them, and synthesize those ideas into
    something self-consistent that fits with the rest of Swift.
    
    The overarching observation here is that there are four major abstractions in computation
    that are interesting to build a model on top of:
    
      - traditional control flow
      - asynchronous control flow
      - message passing and data isolation
      - distributed data and compute
    
    Swift already has a fully-developed model for the first point, incrementally refined and
    improved over the course of years, so we won't talk about it here.  It is important to observe
    that the vast majority of low-level computation benefits from imperative control flow,
    [mutation with value semantics](https://developer.apple.com/videos/play/wwdc2015/414/),
    and yes, reference semantics with classes.  These concepts are the important low-level
    primitives that computation is built on, and reflect the basic abstraction of CPUs.
    
    Asynchrony is the next fundamental abstraction that must be tackled in Swift, because it is
    essential to programming in the real world where we are talking to other machines, to slow
    devices (spinning disks are still a thing!), and looking to achieve concurrency between multiple
    independent operations.  Fortunately, Swift is not the first language to face
    these challenges: the industry as a whole has fought this dragon and settled on
    [async/await](https://en.wikipedia.org/wiki/Await) as the right abstraction.  We propose
    adopting this proven concept outright (with a Swift spin on the syntax).  Adopting
    async/await will dramatically improve existing Swift code, dovetailing with existing and
    future approaches to concurrency.
    
    The next step is to define a programmer abstraction to define and model the independent
    tasks in a program, as well as the data that is owned by those tasks.  We propose the
    introduction of a first-class [actor model](https://en.wikipedia.org/wiki/Actor_model), which
    provides a way to define and reason about independent tasks who communicate between
    themselves with asynchronous message sending.  The actor model has a deep history of
    strong academic work and was adopted and proven in
    [Erlang](https://www.erlang.org) and [Akka](http://akka.io), which successfully power a large
    number of highly scalable and reliable systems.
    With the actor model as a baseline, we believe we can achieve data isolation by ensuring that
    messages sent to actors do not lead to shared mutable state.
    
    Speaking of reliable systems, introducing an actor model is a good opportunity and excuse
    to introduce a mechanism for handling and partially recovering from runtime failures (like
    failed force-unwrap operations, out-of-bounds array accesses, etc).  We explore several
    options that are possible to implement and make a recommendation that we think will be a
    good fit for UI and server applications.
    
    The final step is to tackle whole system problems by enabling actors to run in different
    processes or even on different machines, while still communicating asynchronously through
    message sends.  This can extrapolate out to a number of interesting long term possibilities,
    which we briefly explore.
    
    
    ## Part 1: Async/await
    
    NOTE: This section is concrete enough to have a [fully baked
    proposal](https://gist.github.com/lattner/429b9070918248274f25b714dcfc7619).  From a
    complexity perspective, it is plausible to get into Swift 5, we just need to determine whether
    it is desirable, then if so, debate and refine the proposal as a community.
    
    No matter what global concurrency model is settled on for Swift, it is hard to ignore the
    glaring problems we have dealing with asynchronous APIs.  Asynchronicity is unavoidable
    when dealing with independently executing systems: e.g. anything involving I/O (disks,
    networks, etc), a server, or even other processes on the same system.  It is typically "not ok"
    to block the current thread of execution just because something is taking a while to load.
    Asynchronicity also comes up when dealing with multiple independent operations that can
    be performed in parallel on a multicore machine.
    
    The current solution to this in Swift is to use "completion handlers" with closures.  These are
    [well understood](https://grokswift.com/completion-handlers-in-swift/) but also have a large
    number of well understood problems: they often stack up a pyramid of doom, make error
    handling awkward, and make control flow extremely difficult.
    
    There is a well-known solution to this problem, called
    [async/await](https://en.wikipedia.org/wiki/Await).  It is a popular programming style that
    was first introduced in C# and was later adopted in many other languages, including Python,
    Javascript, Scala, Hack, Dart, Kotlin ... etc.  Given its widespread success and acceptance
    by the industry, I suggest that we do the obvious thing and support this in Swift.
    
    ### async/await design for Swift
    
    The general design of async/await drops right into Swift, but a few tweaks make it fit into
    the rest of Swift more consistently.  We suggest adding `async` as a function modifier akin
    to the existing `throws` function modifier.  Functions (and function types) can be declared as
    `async`, and this indicates that the function is a
    [coroutine](https://en.wikipedia.org/wiki/Coroutine).  Coroutines are functions that may return
    normally with a value, or may suspend themselves and internally return a continuation.
    
    This approach allows the completion handler to be absorbed into the language.  For example,
    before you might write:
    
    ```swift
    func loadWebResource(_ path: String, completionBlock: (result: Resource) -> Void) { ... }
    func decodeImage(_ r1: Resource, _ r2: Resource, completionBlock: (result: Image) -> Void)
    func dewarpAndCleanupImage(_ i : Image, completionBlock: (result: Image) -> Void)
    
    func processImageData1(completionBlock: (result: Image) -> Void) {
        loadWebResource("dataprofile.txt") { dataResource in
            loadWebResource("imagedata.dat") { imageResource in
                decodeImage(dataResource, imageResource) { imageTmp in
                    dewarpAndCleanupImage(imageTmp) { imageResult in
                        completionBlock(imageResult)
                    }
                }
            }
        }
    }
    ```
    
    whereas now you can write:
    
    ```swift
    func loadWebResource(_ path: String) async -> Resource
    func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
    func dewarpAndCleanupImage(_ i : Image) async -> Image
    
    func processImageData1() async -> Image {
        let dataResource  = await loadWebResource("dataprofile.txt")
        let imageResource = await loadWebResource("imagedata.dat")
        let imageTmp      = await decodeImage(dataResource, imageResource)
        let imageResult   = await dewarpAndCleanupImage(imageTmp)
        return imageResult
    }
    ```
    
    `await` is a keyword that works like the existing `try` keyword: it is a no-op at runtime, but
    indicates to a maintainer of the code that non-local control flow can happen at that point.
    Besides the addition of the `await` keyword, the async/await model allows you to write
    obvious and clean imperative code, and the compiler handles the generation of state
    machines and callback handlers for you.
    
    Overall, adding this will dramatically improve the experience of working with completion
    handlers, and provides a natural model to compose futures and other APIs on top of.
    More details are contained in [the full
    proposal](https://gist.github.com/lattner/429b9070918248274f25b714dcfc7619).
    
    ### New asynchronous APIs
    
    The introduction of async/await into the language is a great opportunity to introduce more
    asynchronous APIs to Cocoa and perhaps even entire new framework extensions (like a revised
    asynchronous file I/O API).  The [Server APIs Project](https://swift.org/server-apis/) is also
    actively working to define new Swift APIs, many of which are intrinsically asynchronous.
    
    
    ## Part 2: Actors
    
    Given the ability to define and use asynchronous APIs with expressive "imperative style" control
    flow, we now look to give developers a way to carve up their application into multiple
    concurrent tasks.  We propose adopting the model of
    [actors](https://en.wikipedia.org/wiki/Actor_model): Actors naturally represent real-world
    concepts like "a document", "a device", "a network request", and are particularly well suited
    to event driven architectures like UI applications, servers, device drivers, etc.
    
    So what is an actor?  As a Swift programmer, it is easiest to think of an actor as a
    combination of a `DispatchQueue`, the data that queue protects, and messages that can be
    run on that queue.  Because they are embodied by an (internal) queue abstraction, you
    communicate with Actors asynchronously, and actors guarantee that the data they protect is
    only touched by the code running on that queue.  This provides an "island of serialization
    in a sea of concurrency".
    
    It is straight-forward to adapt legacy software to an actor interface, and it is possible to
    progressively adopt actors in a system that is already built on top of GCD or other
    concurrency primitives.
    
    ### Actor Model Theory
    
    Actors have a deep theoretical basis and have been explored by academia since the 1970s -
    the [wikipedia page on actors](https://en.wikipedia.org/wiki/Actor_model) and the
    [c2 wiki page](http://wiki.c2.com/?ActorsModel) are good places
    to start reading if you'd like to dive into some of the theoretical fundamentals that back the
    model.  A challenge of this work (for Swift's purposes) is that academia assumes a pure actor
    model ("everything is an actor"), and assumes a model of communication so limited that it
    may not be acceptable for Swift.  I'll provide a broad stroke summary of the advantages of
    this pure model, then talk about how to address the problems.
    
    As Wikipedia says:
    
    > In response to a message that it receives, an actor can: make local decisions, create more
    > actors, send more messages, and determine how to respond to the next message received.
    > Actors may modify private state, but can only affect each other through messages (avoiding
    > the need for any locks).
    
    Actors are cheap to construct and you communicate with an actor using efficient
    unidirectional asynchronous message sends ("posting a message in a mailbox").
    Because these messages are unidirectional, there is no waiting, and thus deadlocks are
    impossible.  In the academic model, all data sent in these messages is deep copied, which
    means that there is no data sharing possible between actors.  Because actors cannot touch
    each other's state (and have no access to global state), there is no need for any
    synchronization constructs, eliminating all of the problems with shared mutable state.
    
    To make this work pragmatically in the context of Swift, we need to solve several problems:
    
    - we need a strong computational foundation for all the computation within a task.  Good
      news: this is already done in Swift 1...4!
    - unidirectional async message sends are great, but inconvenient for some things.  We want
      a model that allows messages to return a value (even if we encourage them not to), which
      requires a way to wait for that value. This is the point of adding async/await.
    - we need to make message sends efficient: relying on a deep copy of each argument is not
      acceptable.  Fortunately - and not accidentally - we already have Copy-On-Write (🐮) value
      types and [move semantics](https://github.com/apple/swift/blob/master/docs/OwnershipManifesto.md)
      on the way as a basis to build from.  The trick is dealing with reference types, which are
      discussed below.
    - we need to figure out what to do about global mutable state, which already exists in Swift.
      One option is considered below.
      
    ### Example actor design for Swift
    
    There are several possible ways to manifest the idea of actors into Swift.  For the purposes of
    this manifesto, I'll describe them as a new type in Swift because it is the least confusing way
    to explain the ideas and this isn't a formal proposal.  I'll note right here up front that this is
    only one possible design: the right approach may be for actors to be a special kind of class,
    a model described below.
    
    With this design approach, you'd define an actor with the `actor` keyword.  An actor can
    have any number of data members declared as instance members, can have normal methods,
    and extensions work with them as you'd expect.  Actors are reference types and have an
    identity which can be passed around as a value.  Actors can conform to protocols and
    otherwise dovetail with existing Swift features as you'd expect.
    
    We need a simple running example, so let's imagine we're building the data model for an app
    that has a tableview with a list of strings.  The app has UI to add and manipulate the list.  It
    might look something like this:
    
    ```swift
      actor TableModel {
        let mainActor : TheMainActor
        var theList : [String] = [] {
          didSet {
            mainActor.updateTableView(theList)
          }
        }
        
        init(mainActor: TheMainActor) { self.mainActor = mainActor }
    
        // this checks to see if all the entries in the list are capitalized:
        // if so, it capitalizes the string before returning it to encourage
        // capitalization consistency in the list.
        func prettify(_ x : String) -> String {
          // ... details omitted, it just pokes theList directly ...
        }
    
        actor func add(entry: String) {
          theList.append(prettify(entry))
        }
      }
    ```
    
    This illustrates the key points of an actor model:
    
    - The actor defines the state local to it as instance data, in this case the reference to
       `mainActor` and `theList` is the data in the actor.
    - Actors can send messages to any other actor they have a reference to, using traditional
      dot syntax.
    - Normal (non-actor) methods can be defined on the actor for convenience, and
      they have full access to the state within their `self` actor.
    - `actor` methods are the messages that actors accept.  Marking a method as `actor`
      imposes certain restrictions upon it, described below.
    - It isn't shown in the example, but new instances of the actor are created by using the
      initializer just like any other type: `let dataModel = TableModel(mainActor)`.
    - Also not shown in the example, but `actor` methods are implicitly `async`, so they can
      freely call `async` methods and `await` their results.
    
    It has been found in other actor systems that an actor abstraction like this encourages the
    "right" abstractions in applications, and map well to the conceptual way that programmers
    think about their data.  For example, given this data model it is easy to create multiple
    instances of this actor, one for each document in an MDI application.
    
    This is a straight-forward implementation of the actor model in Swift and is enough to achieve
    the basic advantages of the model.  However, it is important to note that there are a number
    of limitations being imposed here that are not obvious, including:
    
    - An `actor` method cannot return a value, throw an error, or have an `inout` parameter.
    - All of the parameters must produce independent values when copied (see below).
    - Local state and non-`actor` methods may only be accessed by methods defined lexically
      on the actor or in an extension to it (whether they are marked `actor` or otherwise).
    
    ### Extending the model through await
    
    The first limitation (that `actor` methods cannot return values) is easy to address as we've
    already discussed.  Say the app developer needs a quick way to get the number of entries in
    the list, a way that is visible to other actors they have running around.  We should simply
    allow them to define:
    
    ```swift
      extension TableModel {
        actor func getNumberOfEntries() -> Int {
          return theList.count
        }
      }
    ```
    
    This allows them to await the result from other actors:
    
    ```swift
      print(await dataModel.getNumberOfEntries())
    ```
    
    This dovetails perfectly with the rest of the async/await model.  It is unrelated to this
    manifesto, but we'll observe that the more idiomatic way to define that specific
    example is as an `actor var`.  Swift currently doesn't allow property
    accessors to `throw` or be `async`.  When this limitation is relaxed, it would be
    straight-forward to allow `actor var`s to provide the more natural API.
    
    Note that this extension makes the model far more usable in cases like this, but erodes the
    "deadlock free" guarantee of the actor model.  Continuing the analogy that each actor is
    backed by a GCD queue, an await on an `actor` method becomes analogous to calling
    `dispatch_sync` on that queue.  Because only one message is processed by the actor at a
    time, if an actor waits on itself directly (possibly through a chain of references) a deadlock will
    occur - in exactly the same way as it happens with `dispatch_sync`:
    
    ```swift
      extension TableModel {
        actor func f() {
           ...
           let x = await self.getNumberOfEntries()   // trivial deadlock.
           ...
        }
      }
    ```
    
    The trivial case like this can also be trivially diagnosed by the compiler.  The complex case
    would ideally be diagnosed at runtime with a trap, depending on the runtime implementation
    model.
    
    The solution for this is to encourage people to use `Void`-returning `actor` methods that "fire
    and forget".  There are several reasons to believe that these will be the most common: the
    async/await model described syntactically encourages people not to use it (by requiring
    marking, etc), many of the common applications of actors are event-driven applications
    (which are inherently one way), the eventual design of UI and other system frameworks
    can encourage the right patterns from app developers, and of course documentation can
    describe best practices.
    
    ### About that main thread
    
    The example above shows `mainActor` being passed in, following theoretically pure actor
    hygiene.  However, the main thread in UIKit and AppKit is already global state, so we might
    as well admit that and make code everywhere nicer.  As such, it makes sense for AppKit and
    UIKit to define and vend a public global constant actor reference, e.g. something like this:
    
    ```swift
    public actor MainActor {  // Bikeshed: could be named "actor UI {}"
       private init() {}      // You can't make another one of these.
       // Helpful public stuff could be put here to make app developers happy. :-)
    }
    public let mainActor = MainActor()
    ```
    
    This would allow app developers to put their extensions on `MainActor`, making their code
    more explicit and clear about what *needs* to be run on the main thread.  If we got really
    crazy, someday Swift should allow data members to be defined in extensions on classes,
    and app developers would then be able to put their state that must be manipulated on the
    main thread directly on the `MainActor`.
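
    As a sketch of how an app developer might use that (the `showAlert` helper here is invented
    purely for illustration):

    ```swift
    extension MainActor {
      // Anything defined in an extension on MainActor is guaranteed to
      // run on the main thread's queue.
      actor func showAlert(message: String) {
        // ... present some UI here ...
      }
    }

    // From any other actor or async context, this is an async message
    // send that hops over to the main thread:
    mainActor.showAlert(message: "Download finished")
    ```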
    
    ### Data isolation
    
    The way that actors eliminate shared mutable state and explicit synchronization is through
    deep copying all of the data that is passed to an actor in a message send, and preventing
    direct access to actor state without going through these message sends.  This all composes
    nicely, but can quickly introduce inefficiencies in practice because of all the data copying
    that happens.
    
    Swift is well positioned to deal with this for a number of reasons.  First, its strong focus on
    value semantics means that copying these values is a core operation understood and known by
    Swift programmers everywhere.  Second, the use of Copy-On-Write (🐮) as an
    implementation approach fits perfectly with this model.  Note how, in the example above,
    the DataModel actor sends a copy of the `theList` array back to the UI thread so it can
    update itself.  In Swift, this is a super efficient O(1) operation that does some ARC stuff: it
    doesn't actually copy or touch the elements of the array.
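
    As a quick illustration of the copy-on-write behavior being relied on here, this is plain
    Swift that works today:

    ```swift
    var original = Array(repeating: "entry", count: 100_000)
    let snapshot = original        // O(1): both arrays share one buffer
    original.append("new entry")   // the actual copy happens only here
    ```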
    
    The third piece, which is still in development, will come as a result of the work on adding
    [ownership semantics](https://github.com/apple/swift/blob/master/docs/OwnershipManifesto.md)
    to Swift.  When this is available, advanced programmers will have the ability to *move*
    complex values between actors, which is typically also a super-efficient O(1) operation.
    
    This leaves us with three open issues: 1) how do we know whether something has proper
    value semantics, 2) what do we do about reference types (classes and closures), and 3) what
    do we do about global state.  All three of these issues should be explored in detail, because
    there are many different possible answers to each.  I will explore a simple model below in
    order to provide an existence proof for a design, but I do not claim that it is the best model
    we can find.
    
    #### Does a type provide proper value semantics?
    
    This is something that many many Swift programmers have wanted to be able to know the
    answer to, for example when defining generic algorithms that are only correct in the face of
    proper value semantics.  There have been numerous proposals for how to determine this,
    and I will not attempt to summarize them; instead, I'll outline a simple proposal just to provide
    an existence proof for an answer:
    
    - Start by defining a simple marker protocol (the name of which is intentionally silly to reduce
      early bikeshedding) with a single requirement:
      `protocol ValueSemantical { func valueSemanticCopy() -> Self }`
    - Conform all of the applicable standard library types to `ValueSemantical`.  For example,
      Array conforms when its elements conform - note that an array of reference types doesn't
      always provide the semantics we need.
    - Teach the compiler to synthesize conformance for structs and enums whose members are
      all `ValueSemantical`, just like we do for `Codable`.
    - The compiler just checks for conformance to the `ValueSemantical` protocol and
      rejects any arguments and return values that do not conform.
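
    To make the sketch concrete, conformances under this model might look roughly like the
    following (the compiler synthesis is hypothetical, so the requirement is written out by hand,
    and `SearchQuery` is a made-up type):

    ```swift
    protocol ValueSemantical {
      func valueSemanticCopy() -> Self
    }

    // Under the proposed synthesis (analogous to Codable), a struct whose
    // members all conform would get the requirement for free.
    struct SearchQuery: ValueSemantical {
      var text: String
      var maximumResults: Int

      func valueSemanticCopy() -> SearchQuery { return self }
    }
    ```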
    
    To reiterate, the name `ValueSemantical` really isn't the right name for this: things like
    `UnsafePointer`, for example, shouldn't conform.  Enumerating the possible options and
    evaluating the naming tradeoffs between them is a project for another day though.
    
    It is important to realize that this design does *not guarantee memory safety*.  Someone
    could implement the protocol in the wrong way (thus lying about satisfying the requirements)
    and shared mutable state could occur.  In the author's opinion, this is the right tradeoff:
    solving this would require introducing onerous type system mechanics (e.g. something like
    the capabilities system in the [Pony](https://www.ponylang.org/) language).  Swift already
    provides a model where memory safe APIs (e.g. `Array`) are implemented in terms of memory
    unsafety (e.g. `UnsafePointer`); the approach described here is directly analogous.
    
    *Alternate Design*: Another approach is to eliminate the requirement from the protocol:
    just use the protocol as a marker, which is applied to types that already have the right
    behavior.  When it is necessary to customize the copy operation (e.g. for a reference type),
    the solution would be to box values of that type in a struct that provides the right value
    semantics.  This would make it more awkward to conform, but this design eliminates having
    "another kind of copy" operation, and encourages more types to provide value semantics.
    
    #### Reference types: Classes
    
    The solution to this is simple: classes need to conform to `ValueSemantical` (and
    implement the requirement) properly, or else they cannot be passed as a parameter or result
    of an `actor` method.  In the author's opinion, giving classes proper value semantics will not
    be that big of a deal in practice for a number of reasons:
    
    - A number of classes in Cocoa are already semantically immutable, making it trivial and
      cheap for them to conform.
    - The default (non-conformance) is the right default: the only classes that conform will be
      ones that a human thought about.
    - Retroactive conformance allows app developers to handle cases not addressed by the
      framework engineers.
    - Cocoa has a number of classes (e.g. the entire UI frameworks) that are only usable on the
      main thread.  By definition, these won't get passed around.
    
    Beyond that, when you start working with an actor system, it is an inherent part of the
    application design that you don't allocate and pass around big object graphs: you allocate
    them in the actor you intend to manipulate them with.  This is something that has been
    found true in Scala/Akka for example.
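
    For example, a simple mutable class could conform by handing out a fully independent copy
    (a sketch; `UserProfile` is a made-up type):

    ```swift
    protocol ValueSemantical { func valueSemanticCopy() -> Self }

    final class UserProfile: ValueSemantical {
      var name: String
      var favoriteColors: [String]

      init(name: String, favoriteColors: [String]) {
        self.name = name
        self.favoriteColors = favoriteColors
      }

      // Give the receiving actor its own instance, so neither side can
      // observe the other's mutations.
      func valueSemanticCopy() -> UserProfile {
        return UserProfile(name: name, favoriteColors: favoriteColors)
      }
    }
    ```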
    
    #### Reference types: Closures and Functions
    
    It is not safe to pass an arbitrary value with function type across an actor message,
    because it could close over arbitrary actor-local data.  If that data is closed over
    by-reference, then the recipient actor would have arbitrary access to data in the sending
    actor's state.  That said, there is at least one important exception that we should carve
    out: it is safe to pass a closure *literal* when it is known that it only closes over
    data by copy: using the same `ValueSemantical` copy semantics described above.
    
    This happens to be an extremely useful carveout, because it permits some interesting "callback"
    abstractions to be naturally expressed without tight coupling between actors.  Here is a silly
    example:
    
    ```swift
        otherActor.doSomething { self.incrementCount($0) }
    ```
    
    In this case, `OtherActor` doesn't have to know about `incrementCount`, which is defined
    on the `self` actor, reducing coupling between the two actors.
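
    For illustration, the other side of that call might be declared something like this (the
    `doSomething` signature is an assumption, made up to match the example above):

    ```swift
    extension OtherActor {
      // The callback must be a closure literal that only captures its
      // context by copy, so it is safe to invoke from this actor.
      actor func doSomething(callback: (Int) -> Void) {
        let result = 42   // ... do some local work ...
        callback(result)
      }
    }
    ```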
    
    #### Global mutable state
    
    Since we're friends, I'll be straight with you: there are no great answers here.  Swift and C
    already support global mutable state, so the best we can do is discourage the use of it.  We
    cannot automatically detect a problem because actors need to be able to transitively use
    random code that isn't defined on the actor.  For example:
    
    ```swift
    func calculate(thing: Int) -> Int { ... }
    
    actor Foo {
      actor func exampleOperation() {
         let x = calculate(thing: 42)
         ...
      }
    }
    ```
    
    There is no practical way to know whether `calculate` is thread-safe or not.  The only solution
    is to scatter tons of annotations everywhere, including in headers for C code.  I think that
    would be a non-starter.
    
    In practice, this isn't as bad as it sounds, because the most common operations
    that people use (e.g. `print`) are already internally synchronizing, largely because people are
    already writing multithreaded code.  While it would be nice to magically solve this long
    standing problem with legacy systems, I think it is better to just completely ignore it and tell
    developers not to define or use global variables (global `let`s are safe).
    
    All hope is not lost though: Perhaps we could consider deprecating global `var`s from Swift
    to further nudge people away from them.  Also, any accesses to unsafe global mutable
    state from an actor context can and should be warned about.  Taking some steps like this
    should eliminate the most obvious bugs.
    
    ### Scalable Runtime
    
    Thus far, we've dodged the question about how the actor runtime should be implemented.
    This is intentional because I'm not a runtime expert!  From my perspective, GCD is a
    reasonable baseline to start from: it provides the right semantics, it has good low-level
    performance, and it has advanced features like Quality of Service support which are just as
    useful for actors as they are for anything else.  It would be easy to provide access to these
    advanced features by giving every actor a `gimmeYourQueue()` method.
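
    For example, using the `gimmeYourQueue()` escape hatch mentioned above might look like
    this (a sketch; the method is part of the proposed design, not an existing API, and
    `dataModel` is the actor instance from the earlier example):

    ```swift
    import Dispatch

    // Reach down to the actor's underlying queue to use GCD features
    // (QoS, etc.) that the actor layer doesn't expose directly.
    let queue = dataModel.gimmeYourQueue()
    queue.async(qos: .background) {
      // ... low-priority work coordinated with the actor's queue ...
    }
    ```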
    
    The one problem I anticipate with GCD is that it doesn't scale well enough: server developers
    in particular will want to instantiate hundreds of thousands of actors in their application, at
    least one for every incoming network connection.  The programming model is substantially
    harmed when you have to be afraid of creating too many actors: you have to start
    aggregating logically distinct stuff together to reduce the number of queues, which leads to complexity
    and loses some of the advantages of data isolation.
    
    There are also questions about how actors are shut down.  The conceptually ideal model is
    that actors are implicitly released when their reference count drops to zero and when the last
    enqueued message is completed.  This will probably require some amount of runtime
    integration.
    
    Another potential concern is that GCD queues have unbounded depth: if you have a
    producer/consumer situation, a fast producer can outpace the consumer and continuously
    grow the queue of work.  It would be interesting to investigate options for
    providing bounded queues that throttle or block the producer in this sort of situation.
    
    ### Alternative Design: Actors as classes
    
    The design above is simple and self-consistent, but may not be the right model, because
    actors have a ton of conceptual overlap with classes.  Observe:
    
    - Actors have reference semantics, just like classes.
    - Actors form a graph; this means that we need to be able to have `weak`/`unowned`
      references to them.
    - Subclassing of actors makes just as much sense as subclassing of classes, and would
      work the same way.
    - Some people incorrectly think that Swift hates classes: this is an opportunity to restore
      some of their former glory.
    
    However, actors are not *simple classes*: here are some differences:
    
    - Only actors can have `actor` methods on them.  These methods have additional
      requirements put on them in order to provide the safety in the programming model we seek.
    - An "actor class" deriving from a "non-actor base class" would have to be illegal, because
      the base class could escape self or escape local state references in an unsafe way.
    
    One important pivot-point in discussion is whether subclassing of actors is desirable.  If so,
    modeling them as a special kind of class would be a very nice simplifying assumption,
    because a lot of complexity comes in with that (including all the initialization rules etc).  If not,
    then defining them as a new kind of type is defensible, because they'd be very simple and
    being a separate type would more easily explain the additional rules imposed on them.
    
    Syntactically, if we decided to make them classes, it makes sense for this to be a modifier
    on the class definition itself, since actorhood fundamentally alters the contract of the class,
    e.g.:
    
    ```swift
    actor class DataModel : SomeBaseActor { ... }
    ```
    
    
    #### Examples
    
    NOTE: This section should be expanded to show some of the more common design patterns
    so people have more of an intuitive feel of how things work.  Suggestions are welcome!
    
    ## Part 3: Reliability through fault isolation
    
    Swift has many aspects of its design that encourage programmer errors (aka software
    bugs :-) to be caught at compile time: a static type system, optionals, encouraging covered
    switch cases, etc.  However, some errors may only be caught at runtime, including things like
    out-of-bound array accesses, integer overflows, and force-unwraps of nil.
    
    As described in the [Swift Error Handling
    Rationale](https://github.com/apple/swift/blob/master/docs/ErrorHandlingRationale.rst), there
    is a tradeoff that must be struck: it doesn't make sense to force programmers to write logic
    to handle every conceivable edge case: even discounting the boilerplate that would generate,
    that logic is likely to itself be poorly tested and therefore full of bugs.  We must carefully
    weigh and trade off complex issues in order to get a balanced design.  These tradeoffs are
    what led to Swift's approach that does force programmers to think about and write code to
    handle all potentially-nil pointer references, but not to have to think about integer overflow on
    every arithmetic operation.  The new challenge is that integer overflow still must be
    detected and handled somehow, and the programmer hasn't written any recovery code.
    
    Swift handles these with a [fail fast](https://en.wikipedia.org/wiki/Fail-fast) philosophy: it is
    preferable to detect and report a programmer error as quickly as possible, rather than
    "blunder on" with the hope that the error won't matter.  Combined with rigorous testing (and
    perhaps static analysis technology in the future), the goal is to make bugs shallow, and provide
    good stack traces and other information when they occur.  This encourages them to be found
    and fixed quickly, early in the development cycle.  However, when the app ships, this
    philosophy is only great if all the bugs were actually found, because an undetected problem
    causes the app to suddenly terminate itself.
    
    Sudden termination of a process is hugely problematic if it jeopardizes user data, or - in the
    case of a server app - if there are hundreds of clients currently connected to the server at the
    time.  While it is impossible in general to do perfect resolution of an arbitrary programmer
    error, there is prior art for how to handle common problems gracefully.  In the case of Cocoa,
    for example, if an `NSException` propagates up to the top of the runloop, it is useful to try to
    save any modified documents to a side location to avoid losing data.  This isn't guaranteed
    to work in every case, but when it does, the
    user is very happy that they haven't lost their progress.  Similarly, if a server crashes
    handling one of its client's requests, a reasonable recovery scheme is to finish handling the
    other established connections in the current process, but push off new connection requests
    to a restarted instance of the server process.
    
    The introduction of actors is a great opportunity to improve this situation, because actors
    provide an interesting granularity level between the "whole process" and "an individual class"
    where programmers think about the invariants they are maintaining.  Indeed, there is a bunch
    of prior art in making reliable actor systems, and again, Erlang is one of the leaders.  We'll
    start by sketching the basic model, then talk about a potential design approach.
    
    ### Actor Reliability Model
    
    The basic concept here is that an actor that fails has violated its own local invariants, but that
    the invariants in other actors still hold: this is because we've defined away shared
    mutable state.  This gives us the option of killing the individual actor that broke its invariants
    instead of taking down the entire process.  Given the definition of the basic actor model
    with unidirectional async message sends, it is possible to have the runtime just drop any new
    messages sent to the actor, and the rest of the system can continue without even knowing
    that the actor crashed.
    
    While this is a simple approach, there are two problems:
    
    - Actor methods that return a value could be in the process of being `await`ed, but if the
      actor has crashed those awaits will never complete.
    - Dropping messages may itself cause deadlock because of higher-level communication
      invariants that are broken.  For example, consider this actor, which waits for 10 messages
      before passing on the message:
      
    ```swift
      actor Merge10Notifications {
        var counter : Int = 0
        let otherActor = ...  // set up by the init.
        actor func notify() {
          counter += 1
          if counter >= 10 {
            otherActor.notify()
          }
        }
      }
    ```
    
    If one of the 10 actors feeding notifications into this one crashes, then the program will wait
    forever to get that 10th notification.  Because of this, someone designing a "reliable" actor
    needs to think about more issues, and work slightly harder to achieve that reliability.
    
    ### Opting into reliability
    
    Given that a reliable actor requires more thought than building a simple actor, it is reasonable
    to look for opt-in models that provide [progressive disclosure of
    complexity](https://en.wikipedia.org/wiki/Progressive_disclosure).  The first thing
    you need is a way to opt in.  As with actor syntax in general, there are two
    broad options: first-class actor syntax or a class declaration modifier, i.e., one of:
    
    ```swift
      reliable actor Notifier { ... }
      reliable actor class Notifier { ... }
    ```
    
    When one opts an actor into caring about reliability, a new requirement is imposed on all
    `actor` methods that return a value: they are now required to be declared `throws` as well.
    This forces clients of the actor to be prepared for a failure when/if the actor crashes.
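
    For illustration, here is what the earlier `TableModel` example might look like if it were
    declared as a reliable actor (a sketch in the proposed syntax):

    ```swift
    reliable actor TableModel {
      var theList: [String] = []

      // On a reliable actor, value-returning methods must be declared
      // 'throws': the call throws if the actor has crashed.
      actor func getNumberOfEntries() throws -> Int {
        return theList.count
      }
    }

    // Clients now have to acknowledge the possible failure:
    do {
      print(try await dataModel.getNumberOfEntries())
    } catch {
      // The actor crashed; recover or report as appropriate.
    }
    ```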
    
    Implicitly dropping messages is still a problem.  I'm not familiar with the approaches taken in
    other systems, but I imagine two potential solutions:
    
    1) Provide a standard library API to register failure handlers for actors, allowing higher level
       reasoning about how to process and respond to those failures.  An actor's `init()` could
       then use this API to register its failure handler with the system.
    2) Force *all* `actor` methods to throw, with the semantics that they only throw if the actor
       has crashed.  This forces clients of the reliable actor to handle a potential crash, and do so
       on the granularity of all messages sent to that actor.
      
    Between the two, the first approach is more appealing to me, because it allows factoring
    out the common failure logic in one place, rather than having every caller write (hard
    to test) logic to handle the failure in a fine-grained way.  For example, a document actor could
    register a failure handler that attempts to save its data in a side location if it ever crashes.
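
    As a purely hypothetical sketch of the first approach (neither `DocumentModel` nor the
    `ActorRuntime` API exists; both are invented for illustration):

    ```swift
    reliable actor DocumentModel {
      var unsavedEdits: [String] = []

      init() {
        // Hypothetical standard library API: ask the runtime to run this
        // handler if the actor ever crashes, so we can try to save the
        // user's data to a side location.
        ActorRuntime.registerFailureHandler(for: self) {
          // ... attempt to write unsavedEdits somewhere safe ...
        }
      }
    }
    ```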
    
    That said, both approaches are feasible and should be explored in more detail.
    
    *Alternate design*: An alternate approach is to make all actors "reliable" actors, by making
    the additional constraints a simple part of the actor model.  This reduces the number of
    choices a Swift programmer gets-to/has-to make.  If the async/await model ends up making
    async imply throwing, then this is probably the right direction, because the `await` on a value
    returning method would be implicitly a `try` marker as well.
    
    ### Reliability runtime model
    
    Besides the high level semantic model that the programmer faces, there are also questions
    about what the runtime model is.  When an actor crashes:
    
     - What state is its memory left in?
     - How well can the process clean up from the failure?
     - Do we attempt to release memory and other resources (like file descriptors) managed by that actor?
    
    There are multiple possible designs, but I
    advocate for a design where **no cleanup is performed**: if an actor crashes, the runtime
    propagates that error to other actors and runs any recovery handlers (as described in the
    previous section), but it **should not** attempt any further cleanup of the resources owned by
    the actor.
    
    There are a number of reasons for this, but the most important is that the failed actor just
    violated its own consistency with whatever invalid operation it attempted to perform.  At this
    point, it may have started a transaction but not finished it, or may be in any other sort of
    inconsistent or undefined state.  Given the high likelihood for internal inconsistency, it is
    probable that the high-level invariants of various classes aren't intact, which means it isn't
    safe to run the `deinit`-ializers for the classes.
    
    Beyond the semantic problems we face, there are also practical complexity and efficiency
    issues at stake: it takes code and metadata to be able to unwind the actor's stack and release
    active resources.  This code and metadata take up space in the application, and they also take
    time to generate at compile time.  As such, the choice to provide a model that attempted
    to recover from these
    sorts of failures would mean burning significant code size and compile time for something
    that isn't supposed to happen.
    
    A final (and admittedly weak) reason for this approach is that a "too clean" cleanup runs the
    risk that programmers will start treating fail-fast conditions as a soft error that
    doesn't need to be handled with super-urgency.  We really do want these bugs to be found
    and fixed in order to achieve the high reliability software systems that we seek.
    
    ## Part 4: Improving system architecture
    
    As described in the motivation section, a single application process runs in the context of a
    larger system: one that often involves multiple processes (e.g. an app and an XPC daemon)
    communicating through [IPC](https://www.mikeash.com/pyblog/friday-qa-2009-01-16.html),
    clients and servers communicating through networks, and
    servers communicating with each other in "[the cloud](https://tr4.cbsistatic.com/hub/i/r/2016/11/29/9ea5f375-d0dd-4941-891b-f35e7580ae27/resize/770x/982bcf36f7a68242dce422f54f8d445c/49nocloud.jpg)" (using
    JSON, protobufs, GRPC, etc...).  The points
    of similarity across all of these are that they mostly consist of independent tasks that
    communicate with each other by sending structured data using asynchronous message
    sends, and that they cannot practically share mutable state.  This is starting to sound familiar.
    
    That said, there are differences as well, and attempting to paper over them (as was done
    in the older Objective-C "[Distributed
    Objects](https://www.mikeash.com/pyblog/friday-qa-2009-02-20-the-good-and-bad-of-distributed-objects.html)" system)
    leads to serious problems:
    
    - Clients and servers are often written by different entities, which means that APIs must be
    able to evolve independently.  Swift is already great at this.
    - Networks introduce new failure modes that the original API almost certainly did not
      anticipate.  This is covered by "reliable actors" described above.
    - Data in messages must be known-to-be `Codable`.
    - Latency is much higher to remote systems, which can impact API design because
      too-fine-grained APIs perform poorly.
    
    In order to align with the goals of Swift, we cannot sweep these issues under the rug: we
    want to make the development process fast, but "getting something up and running" isn't the
    goal: it really needs to work - even in the failure cases.
    
    ### Design sketch for interprocess and distributed compute
    
    The actor model is a well-known solution in this space, and has been deployed
    successfully in less-mainstream languages like
    [Erlang](https://en.wikipedia.org/wiki/Erlang_(programming_language)#Concurrency_and_distribution_orientation).
    Bringing the ideas to Swift just requires that we make sure it fits cleanly into the existing
    design, taking advantage of the characteristics of Swift and ensuring that it stays true to the
    principles that guide it.
    
    One of these principles is the concept of [progressive disclosure of
    complexity](https://en.wikipedia.org/wiki/Progressive_disclosure): a Swift developer
    shouldn't have to worry about IPC or distributed compute if they don't care about it.  This
    means that actors should opt-in through a new declaration modifier, aligning with the ultimate
    design of the actor model itself, i.e., one of:
    
    ```swift
      distributed actor MyDistributedCache { ... }
      distributed actor class MyDistributedCache { ... }
    ```
    
    By opting in, the actor becomes subject to two additional requirements.
    
     - The actor must fulfill the requirements of a `reliable actor`, since a
       `distributed actor` is a further refinement of a reliable actor.  This means that all
       value returning `actor` methods must throw, for example.
     - Arguments and results of `actor` methods must conform to `Codable`.
     
     In addition, the author of the actor should consider whether the `actor` methods make
     sense in a distributed setting, given the increased latency that may be faced.  Using
     coarse-grained APIs could be a significant performance win.
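
    Putting those requirements together, a sketch of the cache in the proposed syntax might look
    like this (the methods themselves are made up for illustration):

    ```swift
    distributed actor MyDistributedCache {
      var storage: [String: String] = [:]   // String is Codable

      // As a refinement of a reliable actor, value-returning methods must
      // throw; arguments and results must conform to Codable.
      actor func lookup(key: String) throws -> String? {
        return storage[key]
      }

      actor func insert(key: String, value: String) {
        storage[key] = value
      }
    }
    ```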
     
    With this done, the developer can write their actor like normal: no change of language or
    tools, no change of APIs, no massive new conceptual shifts.  This is true regardless of
    whether you're talking to a cloud service endpoint over JSON or an optimized API using
    protobufs and/or GRPC.  There are very few cracks that appear in the model, and the ones
    that do have pretty obvious reasons: code that mutates global
    state won't have those mutations visible across the entire application architecture, files created
    in the file system will work in an IPC context but not in a distributed one, etc.
    
    The app developer can now put their actor in a package and share it between their app and
    their service.  The major change in code is at the allocation site of `MyDistributedCache`, which
    will now need to use an API to create the actor in another process instead of calling its
    initializer directly.  If you want to start using a standard cloud API, you should be able to
    import a package that vends that API as an actor interface, allowing you to completely
    eliminate your code that slings around JSON blobs.
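
    Purely as a hypothetical illustration of that allocation-site change (the `ActorHost` API below
    is invented for this sketch; nothing like it exists today):

    ```swift
    // In-process use: just call the initializer, as before.
    let localCache = MyDistributedCache()

    // Hypothetical placement API: create the same actor in an XPC service
    // and get back a reference that forwards messages over IPC.
    let remoteCache = try ActorHost.xpcService(named: "com.example.cacheservice")
                                   .spawn(MyDistributedCache.self)
    ```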
    
    ### New APIs required
    
    The majority of the hard work in getting this going is on the framework side; for example,
    it would be interesting to start building things like:
    
    - New APIs need to be built to start actors in interesting places: IPC contexts, cloud
      providers, etc.  These APIs should be consistent with each other.
    - The underlying runtime needs to be built, which handles the serialization, handshaking,
      distributed reference counting of actors, etc.
    - To optimize IPC communications with shared memory (mmaps), introduce a new protocol
      that refines `ValueSemantical`.  Heavyweight types can then opt into using it where it
      makes sense.
    - A DSL that describes cloud APIs should be built (or an existing one adopted) to
      autogenerate the boilerplate necessary to vend an actor API for a cloud service.
    
    In any case, there is a bunch of work to do here, and it will take multiple years to prototype,
    build, iterate, and perfect it.  It will be a beautiful day when we get here though.
    
    ## Part 5: The crazy and brilliant future
    
    Looking even farther down the road, there are even more opportunities to eliminate
    accidental complexity by removing arbitrary differences in our language, tools, and APIs.
    You can find these by looking for places with asynchronous communications patterns,
    message sending and event-driven models, and places where shared mutable state doesn't
    work well.
    
    For example, GPU compute and DSP accelerators share all of these characteristics: the
    CPU talks to the GPU through asynchronous commands (e.g. sent over DMA requests and
    interrupts).  It could make sense to use a subset of Swift code (with new APIs for GPU
    specific operations like texture fetches) for GPU compute tasks.
    
    Another place to look is event-driven applications like interrupt handlers in embedded
    systems, or asynchronous signals in Unix.  If a Swift script wants to sign up for notifications
    about `SIGWINCH`, for example, it should be easy to do this by registering an actor and
    implementing the right method.
    
    Going further, a model like this begs for re-evaluation of some long-held debates in the software
    community, such as the divide between microkernels and monolithic kernels.  Microkernels
    are generally considered to be academically better (e.g. due to memory isolation of different
    pieces, independent development of drivers from the kernel core, etc), but monolithic kernels
    tend to be more pragmatic (e.g. more efficient).  The proposed model allows some really
    interesting hybrid approaches, and allows subsystems to be moved "in process" of the main
    kernel when efficiency is needed, or pushed "out of process" when they are untrusted or
    when reliability is paramount, all without rewriting tons of code to achieve it.  Swift's focus on
    stable APIs and API resilience also encourages and enables a split between the core kernel
    and driver development.
    
    In any case, there is a lot of opportunity to make the software world better, but it is also a
    long path to carefully design and build each piece in a deliberate and intentional way.  Let's
    take one step at a time, ensuring that each is as good as we can make it.
    
    # Learning from other concurrency designs
    
    When designing a concurrency system for Swift, we should look at the designs of other
    languages to learn from them and ensure we have the best possible system.  There are
    thousands of different programming languages, but most have very small communities, which
    makes it hard to draw practical lessons from them.  Here we look at a few
    different systems, focusing on how their concurrency design works, ignoring syntactic and
    other unrelated aspects of their design.
    
    ### Pony
    
    Perhaps the most relevant active research language is the [Pony programming
    language](https://www.ponylang.org).  It is actor-based and uses them along with other techniques
    to provide a type-safe, memory-safe, deadlock-free, and datarace-free programming model.
    The biggest
    semantic difference between the Pony design and the Swift design is that Pony invests a
    lot of design complexity into providing [capability-based
    security](https://en.wikipedia.org/wiki/Capability-based_security), which imposes a high
    learning curve.  In contrast, the model proposed here builds on Swift's mature system of
    value semantics.  If transferring object graphs between actors (in a guaranteed memory safe
    way) becomes important in the future, we can investigate expanding the [Swift Ownership
    Model](https://github.com/apple/swift/blob/master/docs/OwnershipManifesto.md) to
    cover more of these use-cases.
    
    
    ### Akka Actors in Scala
    
    [Akka](http://akka.io) is a framework written in the [Scala programming
    language](https://www.scala-lang.org), whose mission is to "Build powerful reactive,
    concurrent, and distributed applications more easily".  The key to this is their well developed
    [Akka actor system](http://doc.akka.io/docs/akka/current/scala/actors.html), which is the
    principal abstraction that developers use to realize these goals (and it, in turn, was heavily
    influenced by [Erlang](https://www.erlang.org)).  One of the great things about
    Akka is that it is mature and widely used by a lot of different organizations and people.  This
    means we can learn from its design, from the design patterns the community has explored,
    and from experience reports describing how well it works in practice.
    
    The Akka design shares a lot of similarities with the design proposed here, because it is an
    implementation of the same actor model.  It is built on futures and asynchronous message sends,
    each actor is a unit of concurrency, there are well-known patterns for when and how actors
    should communicate, and Akka supports easy distributed computation (which they call
    "location transparency").
    
    One difference between Akka and the model described here is that Akka is a library feature,
    not a language feature.  This means that it can't provide additional type system and safety
    features that the model we describe does.  For example, it is possible to accidentally [share
    mutable state](https://manuel.bernhardt.io/2016/08/02/akka-anti-patterns-shared-mutable-state/)
    which leads to bugs and erosion of the model.  Their message loops are also manually written
    loops with pattern matching, instead of being automatically dispatched to `actor` methods -
    this leads to somewhat more boilerplate.  Akka actor messages are untyped (marshalled
    through an Any), which can lead to surprising bugs and difficulty reasoning about what the
    API of an actor is.  Beyond that though, the two models are very comparable - and, no, this
    is not an accident.
    
    Keeping these differences in mind, we can learn a lot about how well the model works in
    practice, by reading the numerous blog posts and other documents available online,
    including, for example:
    - Lots of [Tutorials](http://danielwestheide.com/blog/2013/02/27/the-neophytes-guide-to-scala-part-14-the-actor-approach-to-concurrency.html)
    - [Best practices and design patterns](https://www.safaribooksonline.com/library/view/applied-akka-patterns/9781491934876/ch04.html)
    - Descriptions of the ease and benefits of [sharding servers written in Akka](http://michalplachta.com/2016/01/23/scalability-using-sharding-from-akka-cluster/)
    - Success reports from lots of folks.
    
    Further, it is likely that some members of the Swift community have already encountered this
    model; it would be great if they shared their experiences, both positive and negative.
    
    ### Go
    
    The [Go programming language](https://golang.org) supports a first-class approach to
    writing concurrent programs based on goroutines and (bidirectional) channels. This model
    has been very popular in the Go community and directly reflects many of the core values of
    the Go language, including simplicity and preference for programming with low levels of
    abstraction.  I have no evidence that this is the case, but I speculate that this model was
    influenced by the domains that Go thrives in: the Go model of channels and communicating
    independent goroutines almost directly reflects how servers communicate over network
    connections (including core operations like `select`).
    
    The proposed Swift design is a higher-level abstraction than the Go model, but directly reflects one
    of the most common patterns seen in Go: a goroutine whose body is an infinite loop over a
    channel, decoding messages from the channel and acting on them.  Perhaps the simplest
    example is this Go code (adapted from [this blog
    post](https://www.golang-book.com/books/intro/10)):
    
    ```go
    func printer(c chan string) {
      for {
        msg := <- c
        fmt.Println(msg)
      }
    }
    ```
    
    ... is basically analogous to this proposed Swift code:
    
    ```swift
    actor Printer {
      actor func print(message: String) {
        print(message)
      }
    }
    ```
    
    The Swift design is more declarative than the Go code, but doesn't show many advantages
    or disadvantages in something this small.  However, with more realistic examples, the
    advantages of the higher-level declarative approach show benefit.  For example,
    it is common for goroutines to listen on multiple channels, one for each message they
    respond to.  This example (borrowed from [this blog
    post](http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/)) is fairly
    typical:
    
    ```go
    // Worker represents the worker that executes the job
    type Worker struct {
      WorkerPool  chan chan Job
      JobChannel  chan Job
      quit        chan bool
    }
    
    func NewWorker(workerPool chan chan Job) Worker {
      return Worker{
        WorkerPool: workerPool,
        JobChannel: make(chan Job),
        quit:       make(chan bool)}
    }
    
    func (w Worker) Start() {
      go func() {
        for {
          select {
          case job := <-w.JobChannel:
            // ...
          case <-w.quit:
            // ...
          }
        }
      }()
    }
    
    // Stop signals the worker to stop listening for work requests.
    func (w Worker) Stop() {
      go func() {
        w.quit <- true
      }()
    }
    ```
    
    This sort of thing is much more naturally expressed in our proposed model:
    
    ```swift
    actor Worker {
      actor func do(job: Job) {
        // ...
      }
    
      actor func stop() {
        // ...
      }
    }
    ```
    
    That said, there are advantages and other tradeoffs to the Go model as well.  Go builds on
    [CSP](https://en.wikipedia.org/wiki/Communicating_sequential_processes), which allows
    more ad hoc structures of communication.  For example, because
    goroutines can listen to multiple channels it is occasionally easier to set up some (advanced)
    communication patterns.  Synchronous messages to a channel can only be completely sent
    if there is something listening and waiting for them, which can lead to performance
    advantages (and some disadvantages).  Go doesn't
    attempt to provide any sort of memory safety or data isolation, so goroutines have the
    usual assortment of mutexes and other APIs to use, and are subject to standard bugs like
    deadlocks and [data races](http://accelazh.github.io/go/Goroutine-Can-Race).  Races can
    even break [memory safety](https://research.swtch.com/gorace).
    
    I think that the most important thing the Swift community can learn from Go's concurrency
    model is the huge benefit that comes from a highly scalable runtime model.  It is common to
    have hundreds of thousands or even a million goroutines running around in a server.  The
    ability to stop worrying about "running out of threads" is huge, and is one of the key decisions
    that contributed to the rise of Go in the cloud.
    
    The other lesson is that (while it is important to have a "best default" solution to reach for in
    the world of concurrency) we shouldn't overly restrict the patterns that developers are allowed
    to express.  This is a key reason why the async/await design is independent of futures or any
    other abstraction.  A channel library in Swift will be as efficient as the one in Go, and if shared
    mutable state and channels are the best solution to some specific problem, then we should
    embrace that fact, not hide from it.  That said, I expect these cases to be very rare :-)
    
    ### Rust
    
    Rust's approach to concurrency builds on the strengths of its ownership system to allow
    library-based concurrency patterns to be built on top.  Rust supports message passing
    (through channels), but also support locks and other typical abstractions for shared mutable
    state.  Rust's approaches are well suited for systems programmers, which are the primary
    target audience of Rust.
    
    On the positive side, the Rust design provides a lot of flexibility, a wide range of different
    concurrency primitives to choose from, and familiar abstractions for C++ programmers.
    
    On the downside, their ownership model has a higher learning curve than the design
    described here, their abstractions are typically very low level (great for systems programmers,
    but not as helpful for higher levels), and they don't provide much guidance for programmers
    about which abstractions to choose, how to structure an application, etc.  Rust also doesn't
    provide an obvious model to scale into distributed applications.
    
    That said, improving synchronization for Swift systems programmers will be a goal once the
    basics of the [Swift Ownership
    Model](https://github.com/apple/swift/blob/master/docs/OwnershipManifesto.md) come
    together.  When that happens, it makes sense to take another look at the Rust abstractions
    to see which would make sense to bring over to Swift.
    
    
    

    public by snip2code modified Aug 13, 2017  117  0  3  1

    First Snippet: How to play with Snip2Code

    This is the first example of a snippet: - the title represents in few words which is the exact issue the snippet resolves; it can be something like the name of a method; - the description (this field) is an optional field where you can add interesting information regarding the snippet; something like the comment on the head of a method; - the c
    /* place here the actual content of your snippet. 
       It should be code or pseudo-code. 
       The less dependencies from external stuff, the better! */

    public by vdt modified Aug 5, 2017  23  0  1  0

    resources for job applicants

    resources for job applicants: job_applicant_resources.md
    Hello!
    
    Here are some resources that I've used in my recent job search for full-stack web development positions. 
    Feel free to share.
    
    
    what questions to ask
    ---------------------------
    * https://jvns.ca/blog/2013/12/30/questions-im-asking-in-interviews is a good starting point to come up with questions to ask potential employers
    
    
    where to find tech companies
    --------------------------------
    * https://www.crunchbase.com/#/home/index was THE search engine for finding tech companies based on criteria
    * www.hired.com (where I ultimately found the place where I'm working now; I love the transparent salary data)
    * https://angel.co for early-stage tech startups
    
    
    statistics on tech companies
    ---------------------------------
    * hired.com's annual report on market rate salaries: 
    https://hired.com/blog/highlights/hired-releases-second-annual-global-state-salaries-report/
    * ALL of hired.com's blog posts are well-researched, well-written
    
    
    algorithms interview prep
    ---------------------------------
    * https://www.interviewcake.com/ InterviewCake interactive in-browser questions
    * http://blog.gainlo.co/ has nice blog posts going in-depth on specific algorithms topics
    
    
    random apps
    ---------------------------------
    * MixMax: a gmail inbox plugin. Lots of features. The most useful feature for me was calendar scheduling for interviews.
    * any reminder app to alert me that I have N minutes before my next interview meeting and amount of traffic to get there
    
    

    public by AbhishekGhosh modified Aug 4, 2017  16  0  1  0

    gist clone test

    gist clone test: gist.md
    This is example.
    
    

    public by vdt modified Jul 9, 2017  19  0  1  0

    A collection of notable Rust blog posts

    A collection of notable Rust blog posts: all-the-rust-blogs.md
    - Introduction
      - [Understanding Over Guesswork](https://www.hoverbear.org/2015/09/12/understand-over-guesswork/)
      - [An Alternative Introduction to Rust](http://words.steveklabnik.com/a-new-introduction-to-rust)
      - [Rust and CSV Parsing](http://blog.burntsushi.net/csv/)
    - Ownership
      - [Where Rust Really Shines](https://manishearth.github.io/blog/2015/05/03/where-rust-really-shines/)
      - [Rust Means Never Having to Close a Socket](http://blog.skylight.io/rust-means-never-having-to-close-a-socket/)
      - [The Problem with Single-threaded Shared Mutability](https://manishearth.github.io/blog/2015/05/17/the-problem-with-shared-mutability/)
      - [Rust Ownership the Hard Way](https://chrismorgan.info/blog/rust-ownership-the-hard-way.html)
      - [Strategies for Solving "cannot move out of" Borrowing Errors](http://hermanradtke.com/2015/06/09/strategies-for-solving-cannot-move-out-of-borrowing-errors-in-rust.html)
      - Interior Mutability In Rust
        - [Interior mutability in Rust: what, why, how?](https://ricardomartins.cc/2016/06/08/interior-mutability)
        - [Interior mutability in Rust, part 2: thread safety](https://ricardomartins.cc/2016/06/25/interior-mutability-thread-safety)
        - [Interior mutability in Rust, part 3: behind the curtain](https://ricardomartins.cc/2016/07/11/interior-mutability-behind-the-curtain)
      - [`&` vs. `ref` in Patterns](http://xion.io/post/code/rust-patterns-ref.html)
      - Holy `std::borrow::Cow`
        - [Holy `std::borrow::Cow`!](https://llogiq.github.io/2015/07/09/cow.html)
        - [Holy `std::borrow::Cow` Redux!](https://llogiq.github.io/2015/07/10/cow-redux.html)
      - [Graphical Depiction of Ownership and Borrowing in Rust](https://rufflewind.com/2017-02-15/rust-move-copy-borrow)
      - [Wrapper Types in Rust: Choosing Your Guarantees](https://manishearth.github.io/blog/2015/05/27/wrapper-types-in-rust-choosing-your-guarantees/)
    - Concurrency
      - [Fearless Concurrency with Rust](http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html)
      - [How Rust Achieves Thread Safety](https://manishearth.github.io/blog/2015/05/30/how-rust-achieves-thread-safety/)
      - [Defaulting to Thread-safety: Closures and Concurrency](https://huonw.github.io/blog/2015/05/defaulting-to-thread-safety/)
      - [Some Notes on `Send` and `Sync`](https://huonw.github.io/blog/2015/02/some-notes-on-send-and-sync/)
      - Niko's Rayon Quadrilogy
        - [Rayon: Data Parallelism in Rust](http://smallcultfollowing.com/babysteps/blog/2015/12/18/rayon-data-parallelism-in-rust/)
        - [Parallel Iterators in Rust Part 1: Foundations](http://smallcultfollowing.com/babysteps/blog/2016/02/19/parallel-iterators-part-1-foundations/)
        - [Parallel Iterators in Rust Part 2: Producers](http://smallcultfollowing.com/babysteps/blog/2016/02/25/parallel-iterators-part-2-producers/)
        - [Parallel Iterators in Rust Part 3: Consumers](http://smallcultfollowing.com/babysteps/blog/2016/11/14/parallel-iterators-part-3-consumers/)
      - [Parallelizing Enjarify in Go and Rust](https://medium.com/@robertgrosse/parallelizing-enjarify-in-go-and-rust-21055d64af7e)
    - Traits
      - [Abstraction Without Overhead](https://blog.rust-lang.org/2015/05/11/traits.html)
      - [Going Down the Rabbit Hole with Rust Traits](http://www.jonathanturner.org/2016/02/down-the-rabbit-hole-with-traits.html)
      - Huon's Trait Object Quadrilogy
        - [Peeking Inside Trait Objects](https://huonw.github.io/blog/2015/01/peeking-inside-trait-objects/)
        - [The `Sized` Trait](https://huonw.github.io/blog/2015/01/the-sized-trait/)
        - [Object Safety](http://huonw.github.io/blog/2015/01/object-safety/)
        - [Where `Self` meets `Sized`: Revisiting Object Safety](https://huonw.github.io/blog/2015/05/where-self-meets-sized-revisiting-object-safety/)
      - [Rust's Built-in Traits, the When, How & Why](https://llogiq.github.io/2015/07/30/traits.html)
      - [Rust Traits for Developer Friendly Libraries](https://benashford.github.io/blog/2015/05/24/rust-traits-for-developer-friendly-libraries/)
    - Macros
      - [A Practical Introduction to Rust Macros](https://danielkeep.github.io/practical-intro-to-macros.html)
      - Macros In Rust
        - [Part 1](http://www.ncameron.org/blog/macros-in-rust-pt1/)
        - [Part 2](http://www.ncameron.org/blog/macros-in-rust-pt2/)
        - [Part 3](http://www.ncameron.org/blog/macros-in-rust-pt3/)
        - [Part 4](http://www.ncameron.org/blog/macros-in-rust-pt4/)
      - [Creating an enum iterator using Macros 1.1](https://cbreeden.github.io/Macros11/)
      - [An Overview of Macros in Rust](http://words.steveklabnik.com/an-overview-of-macros-in-rust)
    - The Rust Language
      - [Finding Closure in Rust](https://huonw.github.io/blog/2015/05/finding-closure-in-rust/)
      - [Mixing Matching, Mutations and Moves](https://blog.rust-lang.org/2015/04/17/Enums-match-mutation-and-moves.html)
      - [Reading Rust Function Signatures](http://hoverbear.org/2015/07/10/reading-rust-function-signatures/)
      - [Myths and Legends About Integer Overflow in Rust](https://huonw.github.io/blog/2016/04/myths-and-legends-about-integer-overflow-in-rust/)
      - [Effectively Using Iterators in Rust](http://hermanradtke.com/2015/06/22/effectively-using-iterators-in-rust.html)
      - [A Journey Into Iterators](https://hoverbear.org/2015/05/02/a-journey-into-iterators/)
    - `unsafe` Rust
      - [Unsafe Rust: An Intro and Open Questions](http://cglab.ca/~abeinges/blah/rust-unsafe-intro/)
      - [What Does Rust's `unsafe` Mean?](https://huonw.github.io/blog/2014/07/what-does-rusts-unsafe-mean/)
      - [Memory Leaks are Memory Safe](https://huonw.github.io/blog/2016/04/memory-leaks-are-memory-safe/)
      - [On Reference Counting and Leaks](http://smallcultfollowing.com/babysteps/blog/2015/04/29/on-reference-counting-and-leaks/)
      - [A Few More Remarks on Reference Counting and Leaks](http://smallcultfollowing.com/babysteps/blog/2015/04/30/a-few-more-remarks-on-reference-counting-and-leaks/)
      - [Pre-pooping Your Pants With Rust](http://cglab.ca/~abeinges/blah/everyone-poops/)
      - Niko's Unsafe Abstractions Series
        - [Unsafe Abstractions](http://smallcultfollowing.com/babysteps/blog/2016/05/23/unsafe-abstractions/)
        - [The "Tootsie Pop" Model for Unsafe Code](http://smallcultfollowing.com/babysteps/blog/2016/05/27/the-tootsie-pop-model-for-unsafe-code/)
        - ["Tootsie Pop" Followup](http://smallcultfollowing.com/babysteps/blog/2016/08/18/tootsie-pop-followup/)
        - [Thoughts on Trusting Types in Unsafe Code](http://smallcultfollowing.com/babysteps/blog/2016/09/12/thoughts-on-trusting-types-and-unsafe-code/)
        - [Observational Equivalence and Unsafe Code](http://smallcultfollowing.com/babysteps/blog/2016/10/02/observational-equivalence-and-unsafe-code/)
        - [Assigning Blame to Unsafe Code](http://smallcultfollowing.com/babysteps/blog/2017/01/22/assigning-blame-to-unsafe-code/)
        - [Unsafe Code and Shared References](http://smallcultfollowing.com/babysteps/blog/2017/02/01/unsafe-code-and-shared-references/)
      - [How MutexGuard was Sync When It Should Not Have Been](https://www.ralfj.de/blog/2017/06/09/mutexguard-sync.html)
      - [The Scope of Unsafe](https://www.ralfj.de/blog/2016/01/09/the-scope-of-unsafe.html)
    - Rust in Practice
      - [The Many Kinds of Code Reuse in Rust](http://cglab.ca/~abeinges/blah/rust-reuse-and-recycle/)
      - [Rust Error Handling](http://blog.burntsushi.net/rust-error-handling/)
      - [Why your first FizzBuzz implementation may not work](https://chrismorgan.info/blog/rust-fizzbuzz.html)
      - Herman Radtke's `String` Trilogy
        - [`String` vs. `&str` in Rust Functions](http://hermanradtke.com/2015/05/03/string-vs-str-in-rust-functions.html)
        - [Creating a Rust Function That Accepts `String` or `&str`](http://hermanradtke.com/2015/05/06/creating-a-rust-function-that-accepts-string-or-str.html)
        - [Creating a Rust Function That Returns `String` or `&str`](http://hermanradtke.com/2015/05/29/creating-a-rust-function-that-returns-string-or-str.html)
      - Gankro's Collections Trilogy
        - [Rust, Lifetimes, and Collections](http://cglab.ca/~abeinges/blah/rust-lifetimes-and-collections/)
        - [Rust, Generics, and Collections](http://cglab.ca/~abeinges/blah/rust-generics-and-collections/)
        - [Rust Collections Case Study: BTreeMap](http://cglab.ca/~abeinges/blah/rust-btree-case/)
      - [Learning Rust with Entirely Too Many Linked Lists](http://cglab.ca/~abeinges/blah/too-many-lists/book/)
      - [Working With C Unions in Rust FFI](http://hermanradtke.com/2016/03/17/unions-rust-ffi.html)
      - [Quick tip: the `#[cfg_attr]` attribute](https://chrismorgan.info/blog/rust-cfg_attr.html)
      - Using the `Option` Type Effectively
        - [Part 1](http://blog.8thlight.com/dave-torre/2015/03/11/the-option-type.html)
        - [Part 2](http://blog.8thlight.com/uku-taht/2015/04/29/using-the-option-type-effectively.html)
      - [Rust + Nix = Easier Unix Systems Programming](http://kamalmarhubi.com/blog/2016/04/13/rust-nix-easier-unix-systems-programming-3/)
      - [ripgrep code review](http://blog.mbrt.it/2016-12-01-ripgrep-code-review/)
      - [Elegant Library APIs in Rust](https://scribbles.pascalhertleif.de/elegant-apis-in-rust.html)
      - [gnome-class: Integrating Rust and the GNOME object system](http://smallcultfollowing.com/babysteps/blog/2017/05/02/gnome-class-integrating-rust-and-the-gnome-object-system/)
      - [Making Terminal Applications in Rust with Termion](http://ticki.github.io/blog/making-terminal-applications-in-rust-with-termion/)
      - [Exploring Rust's standard library: system calls and errors](https://people.gnome.org/~federico/blog/rust-libstd-syscalls-and-errors.html)
      - [Starting a New Rust Project Right, with error-chain](http://brson.github.io/2016/11/30/starting-with-error-chain)
    - Async I/O
      - [Getting Acquainted with `mio`](https://hoverbear.org/2015/03/03/getting-acquainted-with-mio/)
      - [My Basic Understanding of `mio` and Async I/O](http://hermanradtke.com/2015/07/12/my-basic-understanding-of-mio-and-async-io.html)
      - [Creating a Simple Protocol With `mio`](http://hermanradtke.com/2015/09/12/creating-a-simple-protocol-when-using-rust-and-mio.html)
      - [Managing Connection State With `mio`](http://hermanradtke.com/2015/10/23/managing-connection-state-with-mio-rust.html)
      - [Zero-cost Futures in Rust](http://aturon.github.io/blog/2016/08/11/futures/)
      - [Designing Futures for Rust](http://aturon.github.io/blog/2016/09/07/futures-design/)
      - [Asynchronous Rust for Fun and Profit](http://xion.io/post/programming/rust-async-closer-look.html)
    - Performance
      - [Benchmarking In Rust](https://llogiq.github.io/2015/06/16/bench.html)
      - [Profiling Rust Applications on Linux](https://llogiq.github.io/2015/07/15/profiling.html)
      - [Does Your Code Leave a Trail of Slowness?](https://jackmott.github.io/2017/02/27/trail-of-slow.html)
      - [Rust Faster!](https://llogiq.github.io/2015/10/03/fast.html)
      - [Rust Performance: A story featuring perf and flamegraph on Linux](http://blog.adamperry.me/rust/2016/07/24/profiling-rust-perf-flamegraph/)
      - [Zero-cost abstractions](https://ruudvanasseldonk.com/2016/11/30/zero-cost-abstractions)
      - [ripgrep is faster than {grep, ag, git grep, ucg, pt, sift}](http://blog.burntsushi.net/ripgrep/)
      - [Rust Performance Pitfalls](https://llogiq.github.io/2017/06/01/perf-pitfalls.html)
      - [Optimizing `Rc` Memory Usage in Rust](https://medium.com/@robertgrosse/optimizing-rc-memory-usage-in-rust-6652de9e119e)
    - The Rust Toolbox
      - Travis on the Train
        - [Helping Travis Catch the `rustc` Train](http://huonw.github.io/blog/2015/04/helping-travis-catch-the-rustc-train/)
        - [Travis on the Train, Part 2](http://huonw.github.io/blog/2015/05/travis-on-the-train-part-2/)
      - [Rust, Travis and GitHub Pages](http://hoverbear.org/2015/03/07/rust-travis-github-pages/)
      - [Fuzzing is Magic](https://www.nibor.org/blog/fuzzing-is-magic---or-how-i-found-a-panic-in-rusts-regex-library/)
      - [Rust Code Coverage Guide: kcov + Travis CI + Codecov / Coveralls](http://sunjay.ca/2016/07/25/rust-code-coverage)
    - Internals
      - [Optimizing Rust Struct Size: A 6-month Compiler Development Project](http://camlorn.net/posts/April%202017/rust-struct-field-reordering.html)
      - [Rust Tidbits: What Is a Lang Item?](http://manishearth.github.io/blog/2017/01/11/rust-tidbits-what-is-a-lang-item/)
      - [Reflections on Rusting Trust](https://manishearth.github.io/blog/2016/12/02/reflections-on-rusting-trust/)
    - Culture
      - [Stability as a Deliverable](https://blog.rust-lang.org/2014/10/30/Stability.html)
      - [The Not Rocket Science Rule of Software Engineering](http://graydon2.dreamwidth.org/1597.html)
      - RIIR
        - [Rewrite Everything In Rust](http://robert.ocallahan.org/2016/02/rewrite-everything-in-rust.html)
        - [Have You Considered Rewriting it In Rust?](http://transitiontech.ca/random/RIIR)
      - [Making Your Open Source Project Newcomer Friendly](http://manishearth.github.io/blog/2016/01/03/making-your-open-source-project-newcomer-friendly/)
      - [Rust Discovery, or: How I Figure Things Out](http://carol-nichols.com/2015/08/01/rustc-discovery/)
      - [The Minimally-nice Open Source Software Maintainer](http://brson.github.io/2017/04/05/minimally-nice-maintainer)
      - [Fireflowers](https://brson.github.io/fireflowers/)
        - [Rust is More than Safety](http://words.steveklabnik.com/rust-is-more-than-safety)
        - [Rust is Mostly Safety](http://graydon2.dreamwidth.org/247406.html)
        - [Safety is Rust's Fireflower](https://thefeedbackloop.xyz/safety-is-rusts-fireflower/)
        - [Fire Mario, Not Fire Flowers](http://words.steveklabnik.com/fire-mario-not-fire-flowers)
        - [Fire Flowers and Marios](https://medium.com/@ag_dubs/fire-flowers-and-marios-marketing-rust-996b3fdbe8f3)
        - [Rust is About Productivity](http://www.ncameron.org/blog/rust-is-about-productivity/)
        - [Rust is Its Community](https://mgattozzi.com/rust-is)
        - [My Thoughts on Rust in 2017](https://medium.com/@Hisako1337/rust-in-2017-8f2b57a67d9b#.3eegqri2g)
        - [Rust is Software's Salvation](https://redox-os.org/news/rust-is-softwares-salvation-17/)
        - [Rust is My Magic Whistle](http://anowell.com/posts/why-rust.html)
        - [Rust is Something Old Made New](http://panpanick.ninja/30-12-2016.html)
        - [Rust is Needed Now More than Ever](https://llogiq.github.io/2016/12/27/retro.html)
        - [Rust is About Boldness](https://www.reddit.com/r/rust/comments/5lo6ny/rust_is_about_boldness/)
        - [Rust is About Better Citizenship](https://kasma1990.gitlab.io/2017/01/01/rust-is-about-better-citizenship/)
        - [Rust Marketing Pitch](https://kasma1990.gitlab.io/2017/01/01/rust-is-about-better-citizenship/)
    - Cheat Sheets
      - [Periodic Table of Rust Types](http://cosmic.mearie.org/2014/01/periodic-table-of-rust-types)
      - [Rust String Conversions Cheat Sheet](https://docs.google.com/spreadsheets/d/19vSPL6z2d50JlyzwxariaYD6EU2QQUQqIDOGbiGQC7Y/pubhtml?gid=0&single=true)
      - [Rust Iterator Cheat Sheet](https://danielkeep.github.io/itercheat_baked.html)
      - [Rust Container Cheat Sheet](https://docs.google.com/presentation/d/1q-c7UAyrUlM-eZyTo1pd8SZ0qwA_wYxmPZVOQkoDmH4/edit)
    - Additional Reading
      - [The Book](http://doc.rust-lang.org/nightly/book)
      - [The Nomicon](https://doc.rust-lang.org/nightly/nomicon/)
      - [Rust By Example](https://www.rustbyexample.com)
      - [Writing an OS in Rust](http://os.phil-opp.com/)
      - [Rust 101](https://www.ralfj.de/projects/rust-101/main.html)
      - [rustlings](https://github.com/carols10cents/rustlings)
      - [The Little Book of Rust Macros](https://danielkeep.github.io/tlborm/)
      - [The Rust FFI Omnibus](http://jakegoulding.com/rust-ffi-omnibus/?updated=2015-11-08)
    - Uncategorized Chapters
      - [Why Is A Rust Executable Large?](https://lifthrasiir.github.io/rustlog/why-is-a-rust-executable-large.html)
      - [Where Are You `From::from`?](https://llogiq.github.io/2015/11/27/from-into.html)
      - Type-level Shenanigans
        - [Type-level Shenanigans](https://llogiq.github.io/2015/12/12/types.html)
        - [More Type-level Shenanigans](https://llogiq.github.io/2016/02/23/moretypes.html)
      - [Rustic Bits](https://llogiq.github.io/2016/02/11/rustic.html)
      - [Mapping Over Arrays](https://llogiq.github.io/2016/04/28/arraymap.html)
      - [Rust for Functional Programmers](http://science.raphael.poss.name/rust-for-functional-programmers.html)
      - [From &str to Cow](http://blog.jwilm.io/from-str-to-cow/)
      - Graydon's Lists
        - [Five Lists of Six Things About Rust](http://graydon2.dreamwidth.org/214016.html)
        - [Things Rust Shipped Without](http://graydon2.dreamwidth.org/218040.html)
      - [for loops in Rust](http://xion.io/post/code/rust-for-loop.html)
      - [Convenient and Idiomatic Conversions in Rust](https://ricardomartins.cc/2016/08/03/convenient_and_idiomatic_conversions_in_rust)
      - [Using and_then and map combinators on the Rust Result Type](http://hermanradtke.com/2016/09/12/rust-using-and_then-and-map-combinators-on-result-type.html)
      - [I used to use pointers. Now what?](https://github.com/diwic/reffers-rs/blob/master/docs/Pointers.md)
      - Let’s Stop Ascribing Meaning to Code Points
        - [Let’s Stop Ascribing Meaning to Code Points](http://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/)
        - [Breaking Our Latin-1 Assumptions](https://manishearth.github.io/blog/2017/01/15/breaking-our-latin-1-assumptions/)
      - [What are Sum, Product, and Pi types?](https://manishearth.github.io/blog/2017/03/04/what-are-sum-product-and-pi-types/)
      - [Rust your ARM microcontroller!](http://blog.japaric.io/quickstart/)
      - [It's Time for a Memory Safety Intervention](https://tonyarcieri.com/it-s-time-for-a-memory-safety-intervention)
      - [Bugs You'll Probably Only Have in Rust](https://gankro.github.io/blah/only-in-rust/)
      - [Four Years With Rust](http://words.steveklabnik.com/four-years-with-rust)
      - [My Experience Writing Enjarify in Rust](https://medium.com/@robertgrosse/my-experience-rewriting-enjarify-in-rust-723089b406ad)
      - [Rust's Type System is Turing-Complete](https://sdleffler.github.io/RustTypeSystemTuringComplete/)
      - [Rust Makes Invariants Explicit](https://medium.com/@robertgrosse/rust-makes-implicit-invariants-explicit-baf4cf17ae50)
      - [Rust: A Scala Engineer's Perspective](https://beachape.com/blog/2017/05/24/rust-from-scala/)
    
    
    

    public by vdt modified Jun 16, 2017  24  0  1  0

    My simple Git Cheatsheet

    My simple Git Cheatsheet: README.md
    Using Git
    ===============
    
    Global Settings
    -----------
    
    Related Setup: https://gist.github.com/hofmannsven/6814278
    
    Related Pro Tips: https://ochronus.com/git-tips-from-the-trenches/
    
    Interactive Beginners Tutorial: http://try.github.io/
    
    
    Reminder
    -----------
    
    Inside Git's pager (`less`), press `minus + shift + s` and then `return` to toggle chopping/folding of long lines!
    
    Show folder content: `ls -la`
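    
    If you prefer chopped long lines by default, one option (assuming `less` is Git's pager, which it usually is) is:
    
    ```sh
    # Make less chop long lines by default when Git pages its output
    git config --global core.pager 'less -S'
    ```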
    
    
    Notes
    -----------
    
    Do not put (external) dependencies in version control!
    
    
    Setup
    -----------
    
    See where Git is located:
    `which git`
    
    Get the version of Git:
    `git --version`
    
    Create an alias (shortcut) for `git status`:
    `git config --global alias.st status`
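    
    Aliases work for any command; the set below is just an optional, illustrative sketch (the short names are arbitrary choices):
    
    ```sh
    # Optional example aliases (short names are arbitrary)
    git config --global alias.co checkout
    git config --global alias.br branch
    git config --global alias.ci commit
    git config --global alias.last "log -1 HEAD"
    ```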
    
    
    Help
    -----------
    
    Help:
    `git help`
    
    
    General
    -----------
    
    Initialize Git:
    `git init`
    
    Get everything ready to commit:
    `git add .`
    
    Get custom file ready to commit:
    `git add index.html`
    
    Commit changes:
    `git commit -m "Message"`
    
    Add and commit in one step:
    `git commit -am "Message"`
    
    Remove files from Git:
    `git rm index.html`
    
    Stage all changes to files that are already tracked:
    `git add -u`
    
    Stop tracking a file but keep it in the working directory:
    `git rm --cached index.html`
    
    Move or rename files:
    `git mv index.html dir/index_new.html`
    
    Undo modifications (restore files from the latest committed version):
    `git checkout -- index.html`
    
    Restore file from a custom commit (in current branch):
    `git checkout 6eb715d -- index.html`
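    
    Put together, a minimal first commit might look like this (the file name is a placeholder):
    
    ```sh
    # Create a repository, stage one file, and commit it
    git init
    git add index.html
    git commit -m "Initial commit"
    ```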
    
    
    Reset
    -----------
    
    Revert a commit (creates a new commit that undoes it):
    `git revert 073791e7dd71b90daa853b2c5acc2c925f02dbc6`
    
    Soft reset (move HEAD only; neither staging nor working dir is changed):
    `git reset --soft 073791e7dd71b90daa853b2c5acc2c925f02dbc6`
    
    Undo latest commit (keep changes staged): `git reset --soft HEAD~`
    
    Mixed reset (move HEAD and change staging to match repo; does not affect working dir):
    `git reset --mixed 073791e7dd71b90daa853b2c5acc2c925f02dbc6`
    
    Hard reset (move HEAD and change staging dir and working dir to match repo):
    `git reset --hard 073791e7dd71b90daa853b2c5acc2c925f02dbc6`
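    
    As a quick side-by-side sketch, all three reset modes can target the commit before HEAD; they differ only in what else they touch:
    
    ```sh
    # All three move HEAD back one commit
    git reset --soft HEAD~1   # staging area and working directory kept
    git reset --mixed HEAD~1  # staging area reset, working directory kept
    git reset --hard HEAD~1   # staging area and working directory reset too
    ```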
    
    Update & Delete
    -----------
    
    Dry run: list the untracked files that would be deleted:
    `git clean -n`
    
    Delete untracked files (staged files are left untouched):
    `git clean -f`
    
    Unstage (undo adds):
    `git reset HEAD index.html`
    
    Add staged changes to the most recent commit (amend):
    `git commit --amend -m "Message"`
    
    Update most recent commit message:
    `git commit --amend -m "New Message"`
    
    
    Branch
    -----------
    
    Show branches:
    `git branch`
    
    Create branch:
    `git branch branchname`
    
    Change to branch:
    `git checkout branchname`
    
    Create and change to new branch:
    `git checkout -b branchname`
    
    Rename branch:
    `git branch -m branchname new_branchname` or:
    `git branch --move branchname new_branchname`
    
    Show all branches completely merged into the current branch:
    `git branch --merged`
    
    Delete merged branch (only possible if not HEAD):
    `git branch -d branchname` or:
    `git branch --delete branchname`
    
    Force-delete a branch that is not merged:
    `git branch -D branch_to_delete`
    
    
    Merge
    -----------
    
    Merge a branch (fast-forwards when possible):
    `git merge branchname`
    
    Merge into the current branch (only if a fast-forward is possible):
    `git merge --ff-only branchname`
    
    Merge into the current branch (always create a merge commit):
    `git merge --no-ff branchname`
    
    Stop merge (in case of conflicts):
    `git merge --abort`
    
    Stop merge (in case of conflicts, for Git versions prior to 1.7.4):
    `git reset --merge`
    
    Merge only one specific commit: 
    `git cherry-pick 073791e7`
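    
    A typical feature-branch flow, combining the branch and merge commands above (the branch name is hypothetical):
    
    ```sh
    # Work on a feature branch, then merge it back with an explicit merge commit
    git checkout -b feature-login
    # ...edit files...
    git commit -am "Add login form"
    git checkout master
    git merge --no-ff feature-login
    git branch -d feature-login
    ```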
    
    Stash
    -----------
    
    Put in stash:
    `git stash save "Message"`
    
    Show stash:
    `git stash list`
    
    Show stash stats:
    `git stash show stash@{0}`
    
    Show stash changes:
    `git stash show -p stash@{0}`
    
    Use custom stash item and drop it:
    `git stash pop stash@{0}`
    
    Use custom stash item and do not drop it:
    `git stash apply stash@{0}`
    
    Delete custom stash item:
    `git stash drop stash@{0}`
    
    Delete complete stash:
    `git stash clear`
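    
    A common stash workflow sketch (branch names are hypothetical):
    
    ```sh
    # Park work in progress, switch away, then restore it
    git stash save "WIP: login form"
    git checkout hotfix-branch
    # ...fix, commit, push...
    git checkout feature-branch
    git stash pop   # re-applies the latest stash and drops it
    ```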
    
    
    Gitignore & Gitkeep
    -----------
    
    About: https://help.github.com/articles/ignoring-files
    
    Useful templates: https://github.com/github/gitignore
    
    Add or edit gitignore: 
    `nano .gitignore`
    
    Track empty dir: 
    `touch dir/.gitkeep`
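    
    A small illustrative start for a `.gitignore` (pick patterns that match your project):
    
    ```sh
    # Append a few example patterns and commit the file
    echo "node_modules/" >> .gitignore
    echo "*.log" >> .gitignore
    echo ".env" >> .gitignore
    git add .gitignore
    git commit -m "Add .gitignore"
    ```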
    
    
    Log
    -----------
    
    Show commits:
    `git log`
    
    Show oneline-summary of commits:
    `git log --oneline`
    
    Show oneline-summary of commits with full SHA-1:
    `git log --format=oneline`
    
    Show oneline-summary of the last three commits:
    `git log --oneline -3`
    
    Filter commits by author, message, or date:
    `git log --author="Sven"`
    `git log --grep="Message"`
    `git log --until=2013-01-01`
    `git log --since=2013-01-01`
    
    Show commits using a predefined output format:
    `git log --format=short`
    `git log --format=full`
    `git log --format=fuller`
    `git log --format=email`
    `git log --format=raw`
    
    Show changes:
    `git log -p`
    
    Show every commit since a specific commit, for one file only:
    `git log 6eb715d.. index.html`
    
    Show the changes of every commit since a specific commit, for one file only:
    `git log -p 6eb715d.. index.html`
    
    Show stats and summary of commits:
    `git log --stat --summary`
    
    Show history of commits as graph:
    `git log --graph`
    
    Show history of commits as graph-summary:
    `git log --oneline --graph --all --decorate`
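    
    The filters above can be combined freely, for example:
    
    ```sh
    # One-line summaries by one author within a date range
    git log --oneline --author="Sven" --since=2013-01-01 --until=2013-12-31
    ```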
    
    
    Compare
    -----------
    
    Compare modified files:
    `git diff`
    
    Compare modified files and highlight changes only:
    `git diff --color-words index.html`
    
    Compare modified files within the staging area:
    `git diff --staged`
    
    Compare branches:
    `git diff master..branchname`
    
    Compare branches with word-level highlighting (here against the parent of `branchname`):
    `git diff --color-words master..branchname^`
    
    Compare commits:
    `git diff 6eb715d`
    `git diff 6eb715d..HEAD`
    `git diff 6eb715d..537a09f`
    
    Compare commits of file:
    `git diff 6eb715d index.html`
    `git diff 6eb715d..537a09f index.html`
    
    Compare without caring about spaces:
    `git diff -b 6eb715d..HEAD` or:
    `git diff --ignore-space-change 6eb715d..HEAD`
    
    Compare without caring about all spaces:
    `git diff -w 6eb715d..HEAD` or:
    `git diff --ignore-all-space 6eb715d..HEAD`
    
    Useful comparings:
    `git diff --stat --summary 6eb715d..HEAD`
    
    Blame:
    `git blame -L10,+1 index.html`
    
    
    Releases & Version Tags
    -----------
    
    Show all released versions:
    `git tag`
    
    Show all released versions with comments:
    `git tag -l -n1`
    
    Create release version:
    `git tag v1.0.0`
    
    Create release version with comment:
    `git tag -a v1.0.0 -m 'Message'`
    
    Checkout a specific release version:
    `git checkout v1.0.0`
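    
    Note that tags are not pushed by default; a minimal sketch for publishing them:
    
    ```sh
    # Push a single tag, or all local tags at once
    git push origin v1.0.0
    git push origin --tags
    ```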
    
    
    Collaborate
    -----------
    
    Show remote:
    `git remote`
    
    Show remote details:
    `git remote -v`
    
    Add remote origin from GitHub project:
    `git remote add origin https://github.com/user/project.git`
    
    Add remote origin from existing empty project on server:
    `git remote add origin ssh://root@123.123.123.123/path/to/repository/.git`
    
    Remove origin:
    `git remote rm origin`
    
    Show remote branches:
    `git branch -r`
    
    Show all branches:
    `git branch -a`
    
    Compare:
    `git diff origin/master..master`
    
    Push (set default with `-u`):
    `git push -u origin master`
    
    Push to default:
    `git push origin master`
    
    Fetch:
    `git fetch origin`
    
    Fetch a custom branch:
    `git fetch origin branchname:local_branchname`
    
    Pull:
    `git pull`
    
    Pull specific branch:
    `git pull origin branchname`
    
    Merge fetched commits:
    `git merge origin/master`
    
    Clone to localhost:
    `git clone https://github.com/user/project.git` or:
    `git clone ssh://user@domain.com/~/dir/.git`
    
    Clone to localhost folder:
    `git clone https://github.com/user/project.git ~/dir/folder`
    
    Clone specific branch to localhost:
    `git clone -b branchname https://github.com/user/project.git`
    
    Delete remote branch (push nothing):
    `git push origin :branchname` or:
    `git push origin --delete branchname`
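    
    Putting the remote commands together, a first push for a fresh local repository might look like this (the URL is a placeholder):
    
    ```sh
    # Connect the local repo to a remote and push once with -u to set the upstream
    git remote add origin https://github.com/user/project.git
    git push -u origin master
    # afterwards, plain `git push` and `git pull` use the tracked branch
    ```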
    
    
    Archive
    -----------
    Create a zip-archive: `git archive --format zip --output filename.zip master`
    
    Export/write custom log to a file: `git log --author=sven --all > log.txt`
    
    
    Troubleshooting
    -----------
    
    Ignore files that have already been committed to a Git repository: http://stackoverflow.com/a/1139797/1815847
    
    
    Security
    -----------
    
    Hide Git on the web via `.htaccess`: `RedirectMatch 404 /\.git` 
    (more info here: http://stackoverflow.com/a/17916515/1815847)
    
    
    Large File Storage
    -----------
    
    Website: https://git-lfs.github.com/
    
    Install: `brew install git-lfs`
    
    Track `*.psd` files: `git lfs track "*.psd"` (init, add, commit and push as written above)
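    
    A minimal LFS sketch from setup to push (the tracked file is a placeholder):
    
    ```sh
    # One-time setup per machine, then track large binaries before committing them
    git lfs install
    git lfs track "*.psd"
    git add .gitattributes design.psd
    git commit -m "Add design file via LFS"
    git push origin master
    ```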
    
    