MarkDown

This channel collects useful snippets for the Markdown language.

    public by yourfriendcaspian, modified Sep 2, 2017

    Instructions for installing the most popular webdrivers, and then the latest version of the standalone Selenium server (`selenium-instructions.md`)
    # Steps For Setting Up Selenium And The Webdrivers
    
    ### Install The Firefox Geckodriver
    
    * Download [the latest Geckodriver for Firefox](https://github.com/mozilla/geckodriver/releases)
    * then `mv` that file to `/usr/local/bin/geckodriver` and `sudo chmod +x /usr/local/bin/geckodriver`
    * make sure you have `"webdriver.firefox.profile" : "geckodriver",` in your `nightwatch.json` file if you are using it
    
    ### Install The Chromedriver
    
    * Download the latest version [from the Chrome site](https://sites.google.com/a/chromium.org/chromedriver/downloads)
    * unzip it if it is a zip file
    * then `mv` that file to `/usr/local/bin/chromedriver` and `sudo chmod +x /usr/local/bin/chromedriver`
    
    ### Install the Safari Driver
    
    * Download the `SafariDriver.safariextz` [from the release site](http://selenium-release.storage.googleapis.com/index.html?path=2.45/)
    * Double click on the file and it will open in Safari
    * Accept the file as trusted
    * It will now show in your extensions
    
    ### Build the latest Selenium binary
    
    * `git clone git@github.com:SeleniumHQ/selenium.git`
    * `cd selenium`
    * `./go clean release`
    * `cd build/dist`
    * You can now run the server with the following: `java -jar selenium-server-standalone-3.0.0-beta1.jar`
    * _you may have a server of a different name depending on when you read this tutorial_
    
    ### Running the server
    
    * `cd` to the directory where you built the jar file
    * run: `java -jar selenium-server-standalone-3.0.0-beta1.jar`
    
    You can also alias the command in a `~/.bashrc` or `~/.zshrc` with:
    
    ```sh
    alias selenium="java -jar /path/to/build/dist/folder/selenium-server-standalone-3.0.0-beta1.jar"
    ```
    
    Remember: _You may have a server of a different name depending on when you read this tutorial_
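    
    To sanity-check the installs, you can confirm the drivers are on your `PATH` and runnable (a quick sketch; paths assume the `mv` destinations above):
    
    ```sh
    # both should print a path under /usr/local/bin
    command -v geckodriver chromedriver
    # both should print a version banner
    geckodriver --version
    chromedriver --version
    ```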
    
    

    public by yourfriendcaspian, modified Sep 2, 2017

    How to install MySQL on Debian/derivatives and create databases and users for Django (`Mysql and Django.md`)
    Installing MySQL with Python and Django on Debian/Derivatives
    =====================
    
    
    To install we need a few dependencies on the system; for now we'll show this on **Debian and derivatives**. But first we install updates and MySQL.
     
     
    ----------
    Updates and MySQL
    ---------
    
    **Update the system** with the following commands
     
    ```
    $ sudo apt-get update
    $ sudo apt-get upgrade
    ```
     
    > **NOTE:** Every system has its own update commands; if your machine is not Debian-derived, **look them up** :D
    
    #### <i class="icon-file"></i> Installing MySQL
    
    Install MySQL (5.5.*)
    ```
    $ sudo apt-get install mysql-server mysql-client
    Passwd for 'root' user: mypasswd
    ```
    Finally, run this command to harden our database a bit
     
    ```
    $ mysql_secure_installation
    ```
    Review the changes it will make carefully: the first question is whether you want
    to keep or change the root password, followed by other security questions.
    
    
    #### <i class="icon-folder-open"></i> Create a database and a user for it
    
    Now we'll create the database Django will connect to, and a user with a password to access it.
    There are two ways to do it:
    ```
    echo "CREATE DATABASE DATABASENAME;" | mysql -u root -p
    echo "CREATE USER 'DATABASEUSER'@'localhost' IDENTIFIED BY 'PASSWORD';" | mysql -u root -p
    echo "GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'DATABASEUSER'@'localhost';" | mysql -u root -p
    echo "FLUSH PRIVILEGES;" | mysql -u root -p
    ```
    This way you'll have to enter your MySQL password on each line; alternatively, you can do it as follows
     
    ```
    $ mysql -u root -p
    ```
    Enter your password and then run the following.
     
    ```
    CREATE DATABASE DATABASENAME;
    CREATE USER 'DATABASEUSER'@'localhost' IDENTIFIED BY 'PASSWORD';
    GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'DATABASEUSER'@'localhost';
    FLUSH PRIVILEGES;
    exit
    ```
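    
    Either way, you can confirm the grants took effect by logging back in as the new user and listing what it can see (a sketch using the placeholder names above):
    
    ```
    $ mysql -u DATABASEUSER -p -e "SHOW DATABASES;"
    ```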
     
    #### <i class="icon-pencil"></i> Check the dependencies
    
    There are only a few dependencies, but let's make sure
     
    ```
    $ sudo apt-get install libmysqlclient-dev python-dev
    ```
     
    #### <i class="icon-trash"></i> Installing the mysql-python driver with PIP
    
    That's nearly everything; now just install the driver with pip, in our virtualenv or globally
     
    ```
    $ sudo pip install mysql-python
    ```
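    
    A quick way to verify the driver installed correctly (a sketch; `MySQLdb` is the module that the mysql-python package provides):
    
    ```
    $ python -c "import MySQLdb; print(MySQLdb.__version__)"
    ```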
     
     
    ----------
    <i class="icon-hdd"></i> Summary
    ---------
    As you can see, you can now create databases and users for each Django project.
     
    
     
     
     
     
    
    
    

    public by yourfriendcaspian, modified Sep 2, 2017

    Unattended Ubuntu Install: download a non-graphical Ubuntu installation ISO (`ubuntu_unattended_install.md`)
    https://askubuntu.com/a/122506/209043
    
    
    wget http://www.instalinux.com/download/iso1132.iso -O /root/iso1132.iso
    wget http://www.instalinux.com/download/preseed1132.txt -O preseed.cfg
    
    
    ---
    
    
    The complete solution is:
    
    Remaster a CD, i.e., download a non-graphical Ubuntu installation ISO (Server or Alternate installation CD), and mount it
    
        $ sudo su -
        # mkdir -p /mnt/iso
        # mount -o loop ubuntu.iso /mnt/iso
    
    Copy the relevant files to a different directory
    
        # mkdir -p /opt/ubuntuiso
        # cp -rT /mnt/iso /opt/ubuntuiso
    
    Prevent the language selection menu from appearing
    
        # cd /opt/ubuntuiso
        # echo en >isolinux/lang
    
    Use GUI program to add a kickstart file named `ks.cfg`
    
        # apt-get install system-config-kickstart
        # system-config-kickstart # save file to ks.cfg
    
    To add packages to the installation, add a `%packages` section to the `ks.cfg` kickstart file: append something like this to the end of the file.
    
        %packages
        @ ubuntu-server
        openssh-server
        ftp
        build-essential
    
    This will install the ubuntu-server "bundle", and will add the `openssh-server`, `ftp` and `build-essential` packages.
    
    Add a preseed file, to suppress other questions
    
        # echo 'd-i partman/confirm_write_new_label boolean true
        d-i partman/choose_partition \
        select Finish partitioning and write changes to disk
        d-i partman/confirm boolean true' > ks.preseed
    
    Set the boot command line to use the kickstart and preseed files
    
        # vi isolinux/txt.cfg
    
    Search for
    
        label install
          menu label ^Install Ubuntu Server
          kernel /install/vmlinuz
          append  file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet --
    
    add `ks=cdrom:/ks.cfg` and `preseed/file=/cdrom/ks.preseed` to the append line. You can remove the `quiet` and `vga=788` words. It should look like
    
          append file=/cdrom/preseed/ubuntu-server.seed \
             initrd=/install/initrd.gz \
             ks=cdrom:/ks.cfg preseed/file=/cdrom/ks.preseed --
    
    Now create a new iso
    
        # mkisofs -D -r -V "ATTENDLESS_UBUNTU" \
             -cache-inodes -J -l -b isolinux/isolinux.bin \
             -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
             -boot-info-table -o /opt/autoinstall.iso /opt/ubuntuiso
    
    That's it. You'll have a CD that will install an Ubuntu system once you boot from it, without requiring a single keystroke.
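    
    Before burning the CD, you can boot the ISO in a VM to check that the unattended install really runs end to end (a sketch; assumes QEMU is available):
    
        # apt-get install qemu-system-x86
        # qemu-system-x86_64 -m 1024 -cdrom /opt/autoinstall.iso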
    
    
    ---
    
    
    To bypass the need to press Enter on boot, change the timeout value from `0` to `10` in `/isolinux/isolinux.cfg` (i.e. `timeout 10`). Note that a value of `10` represents 1 second.
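    
    For example, a one-liner to make that change (a sketch; assumes the line currently reads `timeout 0`):
    
        # sed -i 's/^timeout 0/timeout 10/' /opt/ubuntuiso/isolinux/isolinux.cfg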
    
    
    
    ---
    
    # Last
    
        # mkisofs -D -r -V "ATTENDLESS_UBUNTU" \
             -cache-inodes -J -l -b isolinux.bin \
             -c boot.cat -no-emul-boot -boot-load-size 4 \
             -boot-info-table -o /root/autoinstall.iso /opt/ubuntuiso
    
    
    
    

    public by yourfriendcaspian, modified Sep 2, 2017

    Raspberry Pi VPN Router (`raspberry-pi-vpn-router.md`)
    # Raspberry Pi VPN Router
    
    This is a quick-and-dirty guide to setting up a Raspberry Pi as a "[router on a stick](https://en.wikipedia.org/wiki/One-armed_router)" to [PrivateInternetAccess](http://privateinternetaccess.com/) VPN.
    
    ## Requirements
    
    Install Raspbian Jessie (`2016-05-27-raspbian-jessie.img`) to your Pi's SD card.
    
    Use the **Raspberry Pi Configuration** tool or `sudo raspi-config` to:
    
    * Expand the root filesystem and reboot
    * Boot to commandline, not to GUI
    * Configure the right keyboard map and timezone
    * Configure the Memory Split to give 16MB (the minimum) to the GPU
    * Consider overclocking to the Medium (900MHz) setting on Pi 1, or High (1000MHz) setting on Pi 2
    
    ## IP Addressing
    
    My home network is setup as follows:
    
    * Internet Router: `192.168.1.1`
    * Subnet Mask: `255.255.255.0`
    * Router gives out DHCP range: `192.168.1.100-200`
    
    If your network range is different, that's fine, use your network range instead of mine.
    
    I'm going to give my Raspberry Pi a static IP address of `192.168.1.2` by configuring `/etc/network/interfaces` like so:
    
    ~~~
    auto lo
    iface lo inet loopback
    
    auto eth0
    allow-hotplug eth0
    iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4
    ~~~
    
    You can use WiFi if you like; there are plenty of tutorials around the internet for setting that up, but this should do:
    
    ~~~
    auto lo
    iface lo inet loopback
    
    auto eth0
    allow-hotplug eth0
    iface eth0 inet manual
    
    auto wlan0
    allow-hotplug wlan0
    iface wlan0 inet static
        wpa-ssid "Your SSID"
        wpa-psk  "Your Password"
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4
    ~~~
    
    You only need one connection into your local network; don't connect both Ethernet and WiFi. I recommend Ethernet if possible.
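    
    After editing `/etc/network/interfaces`, one way to apply and verify the static address (a sketch; a simple reboot works just as well):
    
    ~~~
    sudo systemctl restart networking
    ip addr show eth0   # should list 192.168.1.2/24
    ~~~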
    
    ## NTP
    
    Accurate time is important for the VPN encryption to work. If the VPN client's clock is too far off, the VPN server will reject the client.
    
    You shouldn't have to do anything to set this up; the `ntp` service is installed and enabled by default.
    
    Double-check that your Pi is getting the correct time from internet time servers with `ntpq -p`; you should see at least one peer with a `+` or a `*` or an `o`, for example:
    
    ~~~
    $ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    -0.time.xxxx.com 104.21.137.30    2 u   47   64    3  240.416    0.366   0.239
    +node01.jp.xxxxx 226.252.532.9    2 u   39   64    7  241.030   -3.071   0.852
    *t.time.xxxx.net 104.1.306.769    2 u   38   64    7  127.126   -2.728   0.514
    +node02.jp.xxxxx 250.9.592.830    2 u    8   64   17  241.212   -4.784   1.398
    ~~~
    
    ## Setup VPN Client
    
    Install the OpenVPN client:
    
    ~~~
    sudo apt-get install openvpn
    ~~~
    
    Download and uncompress the PIA OpenVPN profiles:
    
    ~~~
    wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
    sudo apt-get install unzip
    unzip openvpn.zip -d openvpn
    ~~~
    
    Copy the PIA OpenVPN certificates and profile to the OpenVPN client:
    
    ~~~
    sudo cp openvpn/ca.rsa.2048.crt openvpn/crl.rsa.2048.pem /etc/openvpn/
    sudo cp openvpn/Japan.ovpn /etc/openvpn/Japan.conf
    ~~~
    
    You can use a different VPN endpoint if you like. Note the extension change from **ovpn** to **conf**.
    
    Create `/etc/openvpn/login` containing only your username and password, one per line, for example:
    
    ~~~
    user12345678
    MyGreatPassword
    ~~~
    
    Change the permissions on this file so only the root user can read it:
    
    ~~~
    sudo chmod 600 /etc/openvpn/login
    ~~~
    
    Set up OpenVPN to use your stored username and password by editing the config file for the VPN endpoint:
    
    ~~~
    sudo nano /etc/openvpn/Japan.conf
    ~~~
    
    Change the following lines so they go from this:
    
    ~~~
    ca ca.rsa.2048.crt
    auth-user-pass
    crl-verify crl.rsa.2048.pem
    ~~~
    
    To this:
    
    ~~~
    ca /etc/openvpn/ca.rsa.2048.crt
    auth-user-pass /etc/openvpn/login
    crl-verify /etc/openvpn/crl.rsa.2048.pem
    ~~~
    
    ## Test VPN
    
    At this point you should be able to test the VPN actually works:
    
    ~~~
    sudo openvpn --config /etc/openvpn/Japan.conf
    ~~~
    
    If all is well, you'll see something like:
    
    ~~~
    $ sudo openvpn --config /etc/openvpn/Japan.conf 
    Sat Oct 24 12:10:54 2015 OpenVPN 2.3.4 arm-unknown-linux-gnueabihf [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Dec  5 2014
    Sat Oct 24 12:10:54 2015 library versions: OpenSSL 1.0.1k 8 Jan 2015, LZO 2.08
    Sat Oct 24 12:10:54 2015 UDPv4 link local: [undef]
    Sat Oct 24 12:10:54 2015 UDPv4 link remote: [AF_INET]123.123.123.123:1194
    Sat Oct 24 12:10:54 2015 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
    Sat Oct 24 12:10:56 2015 [Private Internet Access] Peer Connection Initiated with [AF_INET]123.123.123.123:1194
    Sat Oct 24 12:10:58 2015 TUN/TAP device tun0 opened
    Sat Oct 24 12:10:58 2015 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
    Sat Oct 24 12:10:58 2015 /sbin/ip link set dev tun0 up mtu 1500
    Sat Oct 24 12:10:58 2015 /sbin/ip addr add dev tun0 local 10.10.10.6 peer 10.10.10.5
    Sat Oct 24 12:10:59 2015 Initialization Sequence Completed
    ~~~
    
    Exit this with **Ctrl+c**
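    
    To confirm traffic really leaves through the tunnel, compare your apparent public IP with the VPN up and down (a sketch; `ifconfig.me` is just one of several such services):
    
    ~~~
    curl https://ifconfig.me   # with the VPN up, this should print the VPN exit IP
    ~~~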
    
    ## Enable VPN at boot
    
    ~~~
    sudo systemctl enable openvpn@Japan
    ~~~
    
    ## Setup Routing and NAT
    
    Enable IP Forwarding:
    
    ~~~
    echo -e '\n#Enable IP Routing\nnet.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
    ~~~
    
    Set up NAT from the local LAN down the VPN tunnel:
    
    ~~~
    sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
    sudo iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
    ~~~
    
    Make the NAT rules persistent across reboot:
    
    ~~~
    sudo apt-get install iptables-persistent
    ~~~
    
    The installer will ask if you want to save the current rules; select **Yes**
    
    If you don't select yes, that's fine, you can save the rules later with `sudo netfilter-persistent save`
    
    Make the rules apply at startup:
    
    ~~~
    sudo systemctl enable netfilter-persistent
    ~~~
    
    ## VPN Kill Switch
    
    This will block outbound traffic from the Pi so that only the VPN and related services are allowed.
    
    Once this is done, the only way the Pi can get to the internet is over the VPN.
    
    This means if the VPN goes down, your traffic will just stop working, rather than end up routing over your regular internet connection where it could become visible.
    
    ~~~
    sudo iptables -A OUTPUT -o tun0 -m comment --comment "vpn" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p icmp -m comment --comment "icmp" -j ACCEPT
    sudo iptables -A OUTPUT -d 192.168.1.0/24 -o eth0 -m comment --comment "lan" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 1198 -m comment --comment "openvpn" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m tcp --sport 22 -m comment --comment "ssh" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 123 -m comment --comment "ntp" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p udp -m udp --dport 53 -m comment --comment "dns" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 53 -m comment --comment "dns" -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -j DROP
    ~~~
    
    And save so they apply at reboot:
    
    ~~~
    sudo netfilter-persistent save
    ~~~
    
    If you find traffic on your other systems stops, then look on the Pi to see if the VPN is up or not.
    
    You can check the status and logs of the VPN client with:
    
    ~~~
    sudo systemctl status openvpn@Japan
    sudo journalctl -u openvpn@Japan
    ~~~
    
    ## Configure Other Systems on the LAN
    
    Now we're ready to tell other systems to send their traffic through the Raspberry Pi.
    
    Configure the other systems' network settings like so:
    
    * Default Gateway: Pi's static IP address (eg: `192.168.1.2`)
    * DNS: Something public like Google DNS (`8.8.8.8` and `8.8.4.4`)
    
    Don't use your existing internet router (eg: `192.168.1.1`) as DNS, or your DNS queries will be visible to your ISP and hence may be visible to organizations who wish to see your internet traffic.
    
    ## Optional: DNS on the Pi
    
    To ensure all your DNS goes through the VPN, you could install `dnsmasq` on the Pi to accept DNS requests from the local LAN and forward requests to external DNS servers.
    
    ~~~
    sudo apt-get install dnsmasq
    ~~~
    
    You may now configure the other systems on the LAN to use the Pi (`192.168.1.2`) as their DNS server as well as their gateway.
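    
    A minimal `dnsmasq` setup for that role might look like this (a sketch assuming the addressing above; the appended settings are illustrative):
    
    ~~~
    sudo tee -a /etc/dnsmasq.conf <<'EOF'
    interface=eth0
    listen-address=192.168.1.2
    server=8.8.8.8
    server=8.8.4.4
    EOF
    sudo systemctl restart dnsmasq
    ~~~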
    
    

    public by snip2code, modified Aug 13, 2017

    First Snippet: How to play with Snip2Code

    This is the first example of a snippet: - the title represents in few words which is the exact issue the snippet resolves; it can be something like the name of a method; - the description (this field) is an optional field where you can add interesting information regarding the snippet; something like the comment on the head of a method; - the c
    /* place here the actual content of your snippet. 
       It should be code or pseudo-code. 
       The less dependencies from external stuff, the better! */

    public by Phalacrocorax, modified Feb 25, 2017

    OTHER_THAN_CODE (`OTHER_THAN_CODE.md`)
    [10 inspiring TED talks by women](http://www.egouz.com/yuedu/topics/1362.html)
    
    

    public by Phalacrocorax, modified Feb 17, 2017

    TOOLS DESCRIPTION (`dev_tools.md`)
    # JS: From Getting Started to Giving Up
    ### blah blah blah = babel?
    Front-end development moves far too fast to keep up with: gulp, sass, webpack, AMD, CommonJS, vuejs, vuex, es2015, babel, transpile -- new concepts pour out endlessly.
    I never really understood npm and the rest of the nodejs crowd to begin with,
    but [12 Rules for Professional JavaScript | kancloud](http://www.kancloud.cn/thinkphp/rules-for-professional-javascript/69044) does mention that JS should be built automatically.
    ## transpile
    [JavaScript Transpilers: What They Are & Why We Need Them](https://scotch.io/tutorials/javascript-transpilers-what-they-are-why-we-need-them)
    
    

    public by Phalacrocorax, modified Feb 15, 2017

    JOB | INTERVIEW | FRONTEND (`JOB-FRONTEND.md`)
    From [How to interview a web front-end engineer](http://blog.csdn.net/zhut_acm/aricle/details/44944831)
    ## BASEMENT
    Knowledge you should have mastered on your own
    
    - DOM structure -- what relationships can exist between two nodes, and how to move between nodes at will
    - DOM manipulation -- how to add, remove, move, copy, create, and find nodes
    - Events -- how to use events, and the main differences between the IE and DOM event models
    - XMLHttpRequest -- what it is, how to perform a complete GET request, and how to detect errors
    - Strict mode vs. quirks mode -- how to trigger each, and why distinguishing them matters
    - Box model -- the relationship between margins, padding, and borders, and how the box model differs in IE8 and below
    - Block-level vs. inline elements -- how CSS controls them, how they affect surrounding elements, and how you think their styles should be defined
    - Floated elements -- how to use them, what problems they cause, and how to work around them
    - HTML vs. XHTML -- how the two differ, and which one you should use
    - JSON -- why and how to use it
    
    ## ANSWER
      W3C : /document-node/element-node/text-node/attribute-node/comment-node
    
    - [Parent/child, sibling, descendant, and ancestor nodes](http://www.cnblogs.com/samwu/archive/2012/07/08/2581645.html)
    - #add# appendChild(node), insertBefore() | #delete# removeChild(node) | #search&move# parentNode firstChild children appendChild(node) | #clone# cloneNode(include_all)
    
    
    

    public by YourFriendCaspian, modified Oct 25, 2016

    Setup LibreOffice Online (Log/Guide) [WIP] (`log.md`)
    # Setup LibreOffice Online (Log/Guide) [WIP]
    
    ## About
    
    This guide/log is based on my experience attempting to build and install
    LibreOffice Online and its dependencies on my system.
    
    The end goal is to get LibreOffice Online integrated with [Karoshi Server](https://github.com/the-linux-schools-project/karoshi-server).
    
    LibreOffice Online is still in development (as of 17/06/16).
    
    Each part is labeled with a number; if steps are labeled with the same number, they are alternate methods.
    
    [LibreOffice Online Wiki Page](https://wiki.documentfoundation.org/Development/LibreOffice_Online)
    
    Distro used in this guide: Xubuntu 16.04 amd64
    
    I'll be updating this with progress as I go along; feel free to comment if you have any suggestions or feedback.
    
    There are 2 parts to this guide, as the initial plan was to build and install LibreOffice Online from source and then use it with LibreOnline-ownCloud to provide a way for server users to save and modify their work in Owncloud.
    However, LibreOnline-ownCloud is now classed as "obsolete", with richdocuments taking its place, so I went with the latter option as it seemed more stable and is being kept up to date with the latest from LibreOffice Online.
    
    # Build and Install richdocuments in Owncloud.
    
    In this guide I am using Karoshi Server V11 (160810-1116) with the Owncloud 9.0.1 module installed on the main domain controller.
    
    This requires a built and working version of LibreOffice Online loolwsd and loleaflet. See below for a guide/log.
    
    Install git, which is required to clone the repository.
    
    `sudo apt-get install git`
    
    Clone the latest version of the richdocuments repository.
    
    `git clone https://github.com/owncloud/richdocuments.git`
    
    Enter the richdocuments directory.
    
    `cd richdocuments`
    
    Build a tarball using the makefile inside.
    
    `make dist`
    
    Extract the contents of the tarball into your server's Owncloud apps folder. This is located at owncloud/apps.
    
    For example on my system it is found under /var/www/html/owncloud/apps.
    
    This may need to be run as root (or another superuser), depending on the folder permissions configured.
    
    ```
    tar -xzf *.tar.gz -C /var/www/html/owncloud/apps
    mv /var/www/html/owncloud/apps/owncloud-collabora-online-x.x.x /var/www/html/owncloud/apps/richdocuments
    ```
    (With x.x.x representing the version of richdocuments you have cloned.)
    
    Now there should be a folder named richdocuments within the apps folder.
    
    Install the dependencies required for memcache.
    
    `sudo apt-get install php-apcu php-memcache`
    
    You may need to restart apache to enable the memcache module in php.
    
    `sudo service apache2 restart`
    
    Add `'memcache.local' => '\OC\Memcache\APCu',` to owncloud/config/config.php.
    
    For example on my system it is found under /var/www/html/owncloud/config/config.php.
    
    This may need to be done as root (or another superuser), depending on the file permissions configured.
    
    ```
    sed -i '$ d' /var/www/html/owncloud/config/config.php
    echo "  'memcache.local' => '\OC\Memcache\APCu'," >> /var/www/html/owncloud/config/config.php
    echo ");" >> /var/www/html/owncloud/config/config.php
    ```
    
    The following steps require access to the occ (Owncloud Console). This is executed by your HTTP user.
    
    For example on my system the console can be accessed by running:
    
    `sudo -u www-data php /var/www/html/owncloud/occ`
    followed by your arguments.
    
    www-data is the HTTP user.
    php links to the PHP binary; you may want to swap this out for /opt/rh/php54/root/usr/bin/php if php is not found.
    /var/www/html/owncloud/occ is the path to the occ PHP script inside the owncloud folder.
    
    `sudo -u www-data php /var/www/html/owncloud/occ config:system:set --value='\OC\Memcache\APCu' memcache.local`
    
    If successful this will return: `System config value memcache.local set to string \OC\Memcache\APCu`
    
    Then enable the richdocuments app.
    
    `sudo -u www-data php /var/www/html/owncloud/occ app:enable richdocuments`
    
    If successful this will return: `richdocuments enabled`.
    
    There should now be a Collabora Online section under Admin in Owncloud; leave this for now.
    
    Next, configure the WOPI client URL (where the LibreOffice Online loolwsd is listening). For example, I will configure it to https://192.168.210:9980, as I'm using a test VM without a fully qualified domain name.
    
    The default port is 9980.
    
    `sudo -u www-data php /var/www/html/owncloud/occ config:app:set --value='https://localhost:9980' richdocuments wopi_url`
    
    Move the ca-chain certificate into the OwnCloud ca-bundle if required (as root):
    `sudo cat /opt/karoshi/karoshi_user/online/loolwsd/etc/ca-chain.cert.pem >> /var/www/html/owncloud/resources/config/ca-bundle.crt`
    
    # Build and Install LibreOffice Online from source
    
    ## 1. Introduction
    
    Install git and wget.
    
    `sudo apt-get install git wget`
    
    Clone the LibreOffice Online repository from Github.
    
    `git clone https://github.com/LibreOffice/online.git`
    
    ~~Enter the directory~~
    
    ~~`cd online`~~
    
    ~~Revert the repository back to the commit marking the most current tagged release on Github.~~
    
    ~~This is optional, however I was having issues compiling the latest master branch at commit c47c4fe5a487dd249c4e0a67b25a7a419c732a84 .~~
    
    ~~This guide is currently based off 1.7.2-1, with the latest commit of that tag being: aedd02a210498578444a6b2f4abb84eada012c7f .~~
    
    ~~`git checkout aedd02a210498578444a6b2f4abb84eada012c7f`~~
    
    ~~We can check that this was successful by checking the current commit HEAD is at.~~
    
    ~~`git log -1`~~
    
    ~~Move up a directory~~
    
    ~~`cd ../`~~
    
    (The master branch at: commit 2757adc3c69ce345a9ba8a82166d75665b7e1ef1 compiles fine.)
    
    ## 2. Building LibreOffice On-Line WebSocket server (loolwsd)
    
    ### Building/Installing the POCO Library
    
    #### 3. Install using Ubuntu package (broken for me)
    
    Add the collaboraoffice PPA for POCO to /etc/apt/sources.list
    
    `deb https://www.collaboraoffice.com/apt-poco/ /`
    
    Update the sources list.
    
    `sudo apt-get update`
    
    It seems that the GPG key to verify the PPA wasn't installed in my keychain.
    
    ```
    Get:5 https://www.collaboraoffice.com/apt-poco  InRelease [1,726 B]
    Ign:5 https://www.collaboraoffice.com/apt-poco  InRelease
    Fetched 1,726 B in 0s (2,668 B/s)
    Reading package lists... Done
    W: GPG error: https://www.collaboraoffice.com/apt-poco  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0C54D189F4BA284D
    W: The repository 'https://www.collaboraoffice.com/apt-poco  InRelease' is not signed.
    N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
    N: See apt-secure(8) manpage for repository creation and user configuration details.
    W: There is no public key available for the following key IDs:
    0C54D189F4BA284D
    ```
    
    Install the GPG public key.
    
    `sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0C54D189F4BA284D`
    
    Attempt to update the sources again.
    
    `sudo apt-get update`
    
    ```
    Reading package lists... Done
    W: https://www.collaboraoffice.com/apt-poco/InRelease: Signature by key 6CCEA47B2281732DF5D504D00C54D189F4BA284D uses weak digest algorithm (SHA1)
    ```
    
    It seems that the InRelease file within the PPA defaults to using SHA1, which is now deprecated and rejected by apt.
    
    ```
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1
    ```
    
    Therefore, I went on to build from source instead.
    
    #### 3. Building POCO from source
    
    Install the dependencies required to compile the source.
    
    `sudo apt-get install openssl g++ libssl-dev`
    
    Download the source.
    
    `wget http://pocoproject.org/releases/poco-1.7.4/poco-1.7.4-all.tar.gz`
    
    Make a directory to store the decompressed files.
    
    `mkdir poco`
    
    Uncompress the source files into the directory.
    
    `tar -xv -C poco -f poco-1.7.4-all.tar.gz`
    
    Change directory into the source folder.
    
    `cd poco/poco-1.7.4-all`
    
    Switch to superuser before building the source and installing
    (script requires access to /opt/poco).
    
    `sudo su`
    
    Compile and install the POCO libraries to /opt/poco.
    
    ```
    ./configure --prefix=/opt/poco
    make install
    ```
    
    Switch back to an unprivileged user (not root).
    
    `su <username>`
    
    The POCO library should now be found in /opt/poco; we will need to reference this folder later.
    
    Move back up to the starting folder.
    
    `cd ../..`
    
    ## 4. Downloading/Compiling LibreOffice Core
    
    Clone the master branch of the GitHub repository.
    
    `git clone -b master --single-branch https://github.com/LibreOffice/core.git`
    
    Change directory into the source folder.
    
    `cd core`
    
    Reset the files back to the latest commit in case of accidental changes or formatting.
    
    `git reset --hard`
    
    Before compiling from source we need to install the LibreOffice required dependencies.
    
    First, make sure that the `deb-src` for main is uncommented in the sources list
    (/etc/apt/sources.list).
    
    For example on Xubuntu Xenial (16.04):
    
    `deb-src http://gb.archive.ubuntu.com/ubuntu/ xenial main restricted`
    
    Update the sources.
    
    `sudo apt-get update`
    
    Install the dependencies from the Ubuntu main PPA.
    
    `sudo apt-get build-dep libreoffice`
    
    The package libkrb5-dev does not seem to be included, so we will
    install it separately.
    
    `sudo apt-get install libkrb5-dev`
    
    Now run autogen.sh in preparation for building the source.
    
    `./autogen.sh`
    
    Build LibreOffice core.
    
    `make`
    
    - If the makefile returns the error below.
    
    ```
    root@<hostname>:~/core# make
    
    No. You make ME a sandwich.
    
    Makefile:58: recipe for target 'check-if-root' failed
    make: *** [check-if-root] Error 1
    ```
    
    - Move up to the directory containing the core folder.
    
    `cd ../`
    
    - Recursively change the core folder and its contents to be owned by your user.
    
    `sudo chown -R username:users core`
    
    - Move back into core.
    
    `cd core`
    
    - Build the source again, but with errors ignored.
    
    ```
    root@<hostname>:~/core# make -i
    
    No. You make ME a sandwich.
    
    Makefile:58: recipe for target 'check-if-root' failed
    make: [check-if-root] Error 1 (ignored)
    ```
    - The error should be ignored and the makefile will continue as normal.
    
    ### 5. Building and Installing LibreOffice On-Line WebSocket server (loolwsd)
    
    Move back into the online/loolwsd directory.
    
    `cd ../online/loolwsd`
    
    Install/Update the dependencies needed to build.
    
    `sudo apt-get install -y libpng12-dev libcap-dev libtool m4 automake`
    
    Set the path name to the core folder as the variable $MASTER.
    
    Make sure that it is not followed by a slash and is enclosed by quote marks,
    especially if your path name contains spaces.
    
    For example on my test system:
    `MASTER="/opt/karoshi/karoshi_user/core"`
    
    Then set the variables used for running loolwsd with loleaflet while we are in the loolwsd folder.
    
    `$(pwd)` is the current directory path.
    
    ```
    SYSTEMPLATE=$(pwd)/systemplate
    ROOTFORJAILS=$(pwd)/jails
    ```
    
    Install the required c++ libraries.
    
    `sudo apt-get install -y libcppunit-dev libcppunit-doc pkg-config`
    
    Run autogen.sh in preparation for building the source; this should generate configure.
    
    `./autogen.sh`
    
    Run the configure script in preparation for building loolwsd.
    
    `./configure --enable-silent-rules --with-lokit-path=${MASTER}/include --with-lo-path=${MASTER}/instdir --enable-debug --with-poco-includes=/opt/poco/include --with-poco-libs=/opt/poco/lib`
    
    Build loolwsd.
    
    `make` or `/usr/bin/make`
    
    Create the directory used for caching tiles, as set in configure.ac.
    
    If you did not pass a prefix to change this when running the configure script for loolwsd, the folder should be /usr/local/var/cache/loolwsd.
    
    `mkdir -p /usr/local/var/cache/loolwsd`
    
    Then change the owner of this folder to the current user (if it is not already).
    
    `sudo chown username /usr/local/var/cache/loolwsd`
    
    For some reason loolwsd looks in /etc/loolwsd for the self-generated SSL certificates it requires.
    Therefore I created the folder and copied the certificates into it (as root).
    
    ```
    sudo mkdir -p /etc/loolwsd
    sudo cp /opt/karoshi/karoshi_user/online/loolwsd/etc/cert.pem /etc/loolwsd/cert.pem
    sudo cp /opt/karoshi/karoshi_user/online/loolwsd/etc/key.pem /etc/loolwsd/cert/key.pem
    sudo cp /opt/karoshi/karoshi_user/online/loolwsd/etc/ca-chain.cert.pem /etc/loolwsd/cert/ca-chain.cert.pem
    ```
    Now loolwsd should run without SSL errors.
    
    Then run loolwsd as an unprivileged user (not root).
    
    `make run`
    
    You have to kill loolwsd by hitting CTRL+C before you install loleaflet.
    
    ## 5. Building Leaflet platform for LibreOffice On-Line (loleaflet)
    
    I have not tested this extensively, so you may find some errors for which I have not included a fix here.
    
    Please check the README file found in online/loleaflet as some common errors are addressed there. This is also linked in the Notes section.
    
    Enter the loleaflet directory (from loolwsd).
    
    `cd ../loleaflet`
    
    Install npm (from node.js) if not installed already.
    
    `sudo apt-get install npm nodejs`
    
    Install dependencies needed to build loleaflet through npm.
    
    `npm install -g jake`
    
    Check whether npm is at least version 3.0.
    
    `npm -v`
    
    If not, update npm.
    
    `npm install -g npm`
    
    Create a symbolic link for node.js as the makefile looks for node.js in /usr/bin/node.
    
    `sudo ln -s /usr/bin/nodejs /usr/bin/node`
    
    Build loleaflet; make sure you have defined the variables SYSTEMPLATE, MASTER and ROOTFORJAILS from the loolwsd part of this log/guide.
    
    `make`
    
    To run loolwsd with loleaflet use:
    
    `./loolwsd --o:sys_template_path=${SYSTEMPLATE} --o:lo_template_path=${MASTER}/instdir --o:child_root_path=${ROOTFORJAILS}`
    
    You should now be able to access files within the browser under the URL (this does not include local files):
    
    `https://localhost:9980/loleaflet/dist/loleaflet.html?file_path=file:///PATH/TO_DOC&host=wss://localhost:9980`
    
    To access the admin panel go to:
    
    `https://localhost:9980/loleaflet/dist/admin/admin.html`
    
    When accessing the site you may be asked to trust a certificate if you are using a self-signed certificate on your server without a valid certificate authority.
    
    -TODO/STILL TESTING-
    
    # Integrate LibreOnline-ownCloud with LibreOffice Online ("obsolete")
    
    LibreOnline-ownCloud is now obsolete and is succeeded by richdocuments, check out the log/guide above and the repository linked under Notes.
    
    This requires you to have built and installed LibreOffice Online from source (see the log/guide above) as well as having Owncloud setup and running on your server.
    
    In this guide I am using Karoshi Server V11 (160810-1116) with the Owncloud 9.0.1 module installed on the main domain controller.
    
    
    ## Notes
    
    ## Sources/ Useful Links
    
    [LibreOffice/Core on Github](https://github.com/LibreOffice/core)
    
    [LibreOffice/Online on Github](https://github.com/LibreOffice/online)
    
    [LibreOnline-ownCloud on Github](https://github.com/COMU/libreonline-owncloud)
    
    [richdocuments on Github](https://github.com/owncloud/richdocuments)
    
    [LibreOffice/Online loolwsd README on Github](https://raw.githubusercontent.com/LibreOffice/online/master/loolwsd/README)
    
    [LibreOffice/Online loleaflet README on Github](https://raw.githubusercontent.com/LibreOffice/online/master/loleaflet/README)
    
    [Building LibreOffice (Core) Guide](https://wiki.documentfoundation.org/Development/BuildingOnLinux)
    
    [POCO C++ Libraries](http://pocoproject.org/)
    
    [Karoshi Server on Github](https://github.com/the-linux-schools-project/karoshi-server)
    
    [Karoshi Server V11 Download on Sourceforge](https://sourceforge.net/projects/karoshi/)
    
    
    

    public by fabior, modified Oct 3, 2016

    Quick reference for rewritetoolset (`quick-reference.md`)
    Commands for hypernode
    =====================
    
    __Quick note on command options__
    
    Due to the earlier analysis phase of this toolset, some commands have embedded options that are redundant. These options will show as available but won't do anything on some of the newer commands. They mostly apply to the analysis and benchmark commands.
    
    _Sometimes redundant options_
    
    * --save (sometimes generates a HTML report)
    * --share-statistics (disabled by default)
    * --log-statistics (mostly generates a JSON file on var/rewrite_tools/stats)
    * --store (mostly available)
    
    ------
    # Getting started real quick
    
    ### 1. Analysis
    
    `magerun rewrites:analysis:totals --store all`
    
    `magerun rewrites:analysis:top --store all`
    
    ### 2. Measuring (optional)
    
    `magerun rewrites:benchmark:resolve-urls --store all`
    
    `magerun rewrites:benchmark:indexer --limit 1`
    
    ### 3. Heavy cleaning
    
    __Safe cleaning__
    
    `magerun rewrites:clean:disabled --store all`
    
    `magerun rewrites:clean:older-than 90`
    
    __Risky cleaning__
    
    `magerun rewrites:clean:yolo`
    
    ### 4. Permanent fix
    
    `magerun rewrites:fix:products` 
    
    ----------------
    
    More in-depth information follows below.
    
    # 1. Analyse the problem
    
    First we need some indication of how big the problem is. Two commands are very suitable for that:
    
    __Get duplicate totals__
    
    It's easier to immediately check all stores
    
    `magerun rewrites:analysis:totals --store all`
    
    If you're hitting a million dupes, it's critical; duplicate percentages of 90% and up are easily reached.
    @todo explain when to continue
    
    __Get top duplicated products__
    
    We can see which products or categories cause the problem by running the following command.
    
    `magerun rewrites:analysis:top --store all`
    
    If there are a couple of products or categories that cause most of the duplicates, it can be fixed easily; even manually fixing the URL keys can be an option over using experimental and complex fix commands.
    
    ## 1-1 What is the impact on loading times
    
    We can measure the impact of the problem by benchmarking.
    There are benchmark commands for indexing times, URL resolution times and site performance. The first two are the most useful, but they may take a while if the problem is real.
    
    __URL resolve times__
    
    This basically generates a sitemap. Magento has to process all duplicates to find the actual ones; if your indexes are not up to date (probably not), this will take a while. If you're hitting millions of dupes, skip it; it takes too long.
    
    `magerun rewrites:benchmark:resolve-urls --store all`
    
    __note:__ this command also fixes some outdated indexes, so a second run will finish almost instantly thanks to up-to-date indexes.
    
    __Indexer times__
    
    The following command runs a full reindex of the _catalog_url_ index and outputs the runtime.
    This one is set to 10 runs by default, but if you just want to know the time, set the limit to 1. That is enough information on how long it runs and how many duplicates are created on each run.
    
    `magerun rewrites:benchmark:indexer --limit 1`
    
    # 2. Decide the next course of action
    
    Depending on the earlier results, we now know the impact and scale of the problem.
    Another thing we need to know is the store's state: is it a new shop, a high-volume/high-traffic shop, what are its SEO scores, etc.
    The hard part about fixing this problem is maintaining SEO score while guaranteeing uptime.
    
    Ask yourself these questions:
    
    _Is it a problem to lose all SEO scores?_
    
    * Yes: Start with building the whitelist
    * No: Lucky you
    
    _Is it a problem to go offline or respond really slow for a relative amount of time? (tens of minutes to maybe hours)_
    
    * Yes: Continue with a test  / development setup
    * No: Again, lucky you
    
    _Was there enough time to create a fresh can of coffee while running the analysis commands?_
    
    * Yes: Start with some heavy cleaning
    * No: Go right up to fixing
    
    ## Do some heavy cleaning
    
    In some cases, the indexes are so clogged that it's barely workable. You probably have tons of rewrites which aren't used anyway. We have commands to clean those out.
    
    __Important note__
    
    These commands have a dry-run option (`--dry-run`). Use it to check the result first.
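    
    For example, to preview what the disabled-rewrites cleanup would remove without touching anything (simply combining the options shown in this reference):
    
    `magerun rewrites:clean:disabled --store all --dry-run`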
    
    __Disabled products and store views__
    
    Most stores have store views and/or products which are disabled. We don't need those rewrites, although they will be recreated if we don't fix the underlying problem. Wiping them out makes the database somewhat more workable.
    
    `magerun rewrites:clean:disabled --store all`
    
    __Old rewrites__
    
    The clean:older-than command removes all rewrites which are older than x days from now. Most stores with big rewrite problems have been building them up for several years. After a couple of months, the old ones shouldn't be indexed by Google anymore. There's no mechanism to check for indexed URLs yet, but this is possible if this toolset proves worthy. The whitelist commands have options to whitelist URLs by CSV (Google Analytics), access logs or the visitor log with another time-in-days option. They are not available to this clean:older-than command, yet.
    
    Run the following command:
    
    `magerun rewrites:clean:older-than days`
    
    where days is the number of days in the past you want to preserve rewrites for. The --store option is available, but I prefer to set it to --store all.
    The --dry-run option works too.
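    
    For example, to preview removing everything older than 90 days across all stores (again just combining the options described above):
    
    `magerun rewrites:clean:older-than 90 --store all --dry-run`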
    
    __Everything__
    
    There are some scenarios where a shop has lost all its scores and income, and everything is pretty messed up and clogged. When there's nothing you need to preserve, it's time to start fresh.
    The clean:yolo command _removes every duplicate rewrite_ without checking other things like age, whitelists or disabled statuses. You still have to fix the duplicate keys, but the store won't be blocking as much. You only live once, right? Clichés are there for the ones who have nothing to care about.
    
    `magerun rewrites:clean:yolo`
    
    This command has the --dry-run option available
    
    ## Permanent fixing
    
    The whitelist commands are not yet integrated with permanent fixing; therefore only one permanent fix command is available. But it's a generally good fix.
    
    `magerun rewrites:fix:products` 
    
    * Start with the `--dry-run` option
    * Optionally specify a different suffix if urls seem out of shape `--new-suffix`
    
    ## Building the whitelist (unfinished)
    
    This stage is required if you need to keep your SEO scores. We can make cleanup safer by creating a whitelist of rewrite URLs, adding URLs from different sources. The goal is to whitelist URLs that were recently visited and are probably indexed by Google. Sources are Google Analytics or any other website statistics tool or service (CSV), Magento's visitor log table and server access logs.
    
    __note__ Building whitelists is mostly finished; using them is not yet implemented. Testing the whitelist build-up would be appreciated. The implementation does not require a lot of time and will be done a.s.a.p.
    
    Start by adding URLs from the sources available to you.
    
    ### Adding sources to the whitelist databases
    
    We use a JSON-based database for this. Each command builds a separate JSON database which is later processed into a master whitelist. This master whitelist is then used to back up rewrites or to skip their removal.
    
    #### From access logs
    
    This command currently only works on hypernode. Parsing all those access logs requires quite a lot of memory. The command is aware of its memory usage and will automatically trigger garbage collection to lower it. Problems could come up in high-traffic stores.
    
    `magerun   rewrites:log:parse --file="/var/log/nginx/access.log*"`
    
    #### From visitor logs
    
    This command processes Magento's visitor log table into a whitelist database. Rewrites older than x days will be filtered out. By default this is set to 60 days.
    
    `magerun rewrites:url:visitor --max-age 90`
    
    #### From CSV
    
    This is the best and safest option for preserving Google SEO scores, as a CSV export from Google Analytics can be added to the whitelist. Specify a path to the CSV and a column to take URLs from.
    
    `magerun rewrites:url:csv --csv path/to/csv.csv --column urlcolumn`
    
    ### Building the whitelist
    
    When all whitelist sources are converted to whitelist JSON databases, it is time to process them into one master whitelist. All sources are combined and cut up into segments. For example:
    
    _url_: some_product_url_duplicatevalue.html  
    _segment_: some_product_url
    
    Each segment is then queried against the rewrites database, and each result of this segment query is filtered on a max-age and matched back against the combined sources of whitelists. _Each match_ is then added to the master whitelist. This dramatically reduces the size of the master whitelist and the load on the database. The extra filter makes sure no redundant rewrites are added to the master list. If all sources are added correctly, this is a strong safeguard for maintaining valuable URLs. The master whitelist will be used to back up everything in the future, so rewrites could be wiped and recreated after fixing the keys. Unfortunately, this is a work in progress. But testing the build-up is gladly appreciated.
    
    This concept is a complex one; I'm happy to explain it in depth.
    
    __Usage__
    
    `magerun rewrites:url:whitelist`
    
    
    