GNU Screen Configuration and User Guide

If you’re a command-line user, there’s a good chance you’ve come across one of the best ways to handle multiple sessions in a single terminal: screen!

It’s a simple, powerful tool that you can install with either

$ sudo apt install screen

or

$ brew install screen

but it’s a little cryptic to use! First, I’d suggest configuring it so it looks nicer and is a bit more informative.

In your home directory (~/) create (or edit) your .screenrc file. In it, put the following configurations:

startup_message off
attrcolor b ".I"
term xterm-256color
defscrollback 30000
hardstatus alwayslastline
hardstatus string '%H %= %t:Screen %n%f %= %Y-%m-%d %c'

This will allow for things like bold text, colors in the terminal, a very long scrollback and an informative status bar across the bottom of the screen. In the hardstatus string, %H is the hostname, %t and %n are the window title and number, %f shows window flags, %Y-%m-%d %c is the date and time, and the %= escapes pad the segments apart.

Now just run

$ screen

in your terminal and you’ll jump into a created screen! But why bother with this when logging in works just as well? Screen has a ton of useful features.

First, let’s go with the basics. Press Ctrl a and then c to create and jump to a new screen. If you’re using the configuration we set earlier, the only change you’ll see right away is that the status bar at the bottom now shows Screen 1 instead of Screen 0. To move between these screens, type

Ctrl a n to move to the next screen (it cycles around) or

Ctrl a p to move to the previous screen.

You can give each of these screens its own title to make it easier to tell which screen you’re looking at.

Ctrl a Shift A will open a prompt “Set window title to:” and will allow you to edit the title text there.

Screen is also able to run in the background. This is useful if you want to keep the same session on a remote machine but log in from different locations.

Ctrl a d will put the current screen session into a “detached” state. The processes inside that screen session will continue to run (poor-man’s service) and you can see what sessions are running by typing

$ screen -ls

If there is only one detached session, you can easily run

$ screen -r

and immediately reattach to that screen session. If there are multiple detached screens, you’ll have to specify the screen session name or screen id.

There is a screen on:
29529.ttys002.LAPTOP-1083 (Detached)

$ screen -r 29529

Screen has a lot more capabilities than what I covered here, but you can see a list of all of the commands and shortcuts while inside of a screen session if you type

Ctrl a : and then help

Command key:  ^A   Literal ^A:  a
   break       ^B b
   clear       C
   colon       :
   copy        ^[ [
   detach      ^D d
   digraph     ^V
   displays    *
   dumptermcap .
   fit         F
   flow        ^F f
   focus       ^I
   hardcopy    h   
   help        ?
   history     { }
   info        i
   kill        ^K k
   lastmsg     ^M m
   license     ,
   lockscreen  ^X x
   log         H
   meta        a
   monitor     M
   next        ^@ ^N sp n
   number      N
   only        Q
   other       ^A
   pow_break   B
   pow_detach  D
   prev        ^H ^P p ^?
   quit        ^\
   readbuf     <
   redisplay   ^L l
   remove      X
   removebuf   =
   reset       Z
   screen      ^C c
   select      '
   silence     _
   split       S
   suspend     ^Z z
   time        ^T t
   title       A
   vbell       ^G
   version     v
   width       W
   windows     ^W w
   wrap        ^R r
   writebuf    >
   xoff        ^S s
   xon         ^Q q
   ^]  paste .
   "   windowlist -b
   -   select -
   0   select 0
   1   select 1
   2   select 2
   3   select 3
   4   select 4
   5   select 5
   6   select 6
   7   select 7
   8   select 8
   9   select 9
   ]    paste . 

Enjoy using screen!

How to set up C# dotnet core development from scratch on Ubuntu 18.04

First things first: you need to install dotnet core on your computer. You can find the SDK download and instructions here:

To allow Aptitude to install Microsoft packages, you’ll have to add the Microsoft Certificate Key. Luckily, Microsoft has provided this in a convenient package.

wget -q
sudo dpkg -i packages-microsoft-prod.deb

Once you have the key, you’ll need to add the repository and HTTPS support.

sudo add-apt-repository universe
sudo apt install apt-transport-https
sudo apt update

Finally, you’re ready to install dotnet core!

sudo apt install dotnet-sdk-2.2

The next thing you’ll need to do is set up your IDE. You could use vi or nano, but to make things consistent and simple, I use VS Code. You’ll have to download the .deb package from here:

Then you just need to install it! (your version number might be different)

sudo dpkg -i code_1.31.1-1549938243_amd64.deb

Once you’ve installed the editor, run it and add the C# extension. You can find it here:

Copy the quick-install line

ext install ms-vscode.csharp

and in VS Code press Ctrl+P and paste the line in to install.

Now you’re set up to build dotnet core applications and libraries on Ubuntu! For what you can do with this, please see my post on [How to set up a dotnet core REST API with Dapper and xunit]! (coming soon)

Learning Ansible like a Command Line addict (Part 2)

Please see part 1

Everything I’ve read about Ansible talks about these big fancy “playbooks” but all I know how to do is run a ping from the command line to check if my servers exist. Do I have to copy an existing playbook and hope it does what I want?

Yea, forget that! Let’s try running something simple, like a file copy. Remember, I set this up as my jenkins user, but whatever user you’re using, log in as them to do the next parts.

First, we need a file to copy. I started with a dummy file in my home directory, hello.txt and some text inside “hello ansible!” So to run a copy command from ansible, you have to type:

ansible -m copy -a "src=./hello.txt dest=./" webservers

And you get a ton of awesome output!

jenkins@deploy:~$ ansible -m copy -a "src=./hello.txt dest=./" webservers
web02 | SUCCESS => {
 "changed": true, 
 "checksum": "af53119865a6f135b4ab851cb2b580cdc8e9f075", 
 "dest": "./hello.txt", 
 "gid": 1001, 
 "group": "deploy", 
 "md5sum": "0b2f7c2ec4095f9b8466c0913f1af3f3", 
 "mode": "0644", 
 "owner": "deploy", 
 "size": 15, 
 "src": "/home/deploy/.ansible/tmp/ansible-tmp-1497728808.43-244137364817946/source", 
 "state": "file", 
 "uid": 1001
}
web01 | SUCCESS => {
 "changed": true, 
 "checksum": "af53119865a6f135b4ab851cb2b580cdc8e9f075", 
 "dest": "./hello.txt", 
 "gid": 1001, 
 "group": "deploy", 
 "md5sum": "0b2f7c2ec4095f9b8466c0913f1af3f3", 
 "mode": "0644", 
 "owner": "deploy", 
 "size": 15, 
 "src": "/home/deploy/.ansible/tmp/ansible-tmp-1497728808.42-136193633452143/source", 
 "state": "file", 
 "uid": 1001
}

You’ll see on each of the webservers that there is now a hello.txt file in the deploy user’s home directory. Sweet! But a copy command is pretty slow when you’re dealing with a website’s worth of files. Maybe we can use rsync to speed this process up a bit?

NOTE: I did not have rsync on any of the servers in question here, so stuff blew up and I had to install it. sudo apt-get install rsync before you suffer the same fate as me!

To test, create a directory of project files (test-directory was mine) and move the hello.txt into it. I created a few other files in there too just to see how it worked. Then just run

ansible -m synchronize -a "src=./test-directory/ dest=./ archive=yes rsync_opts=--exclude=.git" webservers

And this will use rsync to copy the contents of the test-directory into the home directory of the deploy user on each of the servers.
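Under the hood, the synchronize module drives rsync. If you want to convince yourself what archive mode and the exclude option do before pointing it at real servers, a purely local run (throwaway temp directories, made-up file names) shows the behavior:

```shell
# Build a fake project with a .git directory, then rsync it with the same
# flags the module uses: -a (archive) and --exclude=.git.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/.git"
echo "hello ansible!" > "$src/hello.txt"
rsync -a --exclude=.git "$src/" "$dst/"
ls "$dst"    # hello.txt is copied; .git is skipped
```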

Now we have a pretty decent way to deploy versioned releases of the code to our webservers. However, we’ll need a way to tell our webserver where the new directory is. I use symlinks, which keeps it simple: point the Apache or nginx config file at the symlink, and when you do a release you can swap it out. Oftentimes you don’t even need to restart the server.

Here’s the list of the commands I used on my Jenkins CI deploy tool.

ansible -m file -a "path=project1/${BUILD_TAG} state=directory" webservers
ansible -m synchronize -a "src=./ dest=project1/${BUILD_TAG}/ archive=yes rsync_opts=--exclude=.git" webservers
ansible -m file -a "path=project1-live state=absent" webservers
ansible -m file -a "src=project1/${BUILD_TAG} dest=project1-live state=link" webservers

This worked pretty awesome… at least until I deployed 20 times and ran out of disk space. I wasn’t cleaning up any of my old builds! I found a helpful but non-functioning answer on Stack Overflow, and then I hit a wall: I couldn’t figure out how to run “with_items” without building out a playbook.

I struggled for a few days, but it suddenly clicked: I know enough about Ansible now to be able to build a playbook that does exactly what I want.

With those previous commands, I created an identical playbook in the jenkins home directory.

- hosts: webservers
  tasks:
  - name: create versioned directory
    file: path=project1/{{ build_tag }} state=directory

  - name: sync files to folder
    synchronize: src=./ dest=project1/{{ build_tag }}/ archive=yes rsync_opts=--exclude=.git

  - name: delete symlink
    file: path=project1-live state=absent

  - name: link new site
    file: src=project1/{{ build_tag }} dest=project1-live state=link

Then in Jenkins I just had to run:

ansible-playbook ${JENKINS_HOME}/project1-playbook.yml --extra-vars="build_tag=${BUILD_TAG}"

And it does exactly the same thing as before!

Now I can make some modifications to this playbook to clean up old deploy versions, and we should be good to go. Here’s my final playbook and Jenkins build command.

Created an “ansible-playbooks” directory in jenkins home, and then added the file ~jenkins/ansible-playbooks/webdeploy-playbook.yml

- hosts: "{{ server }}"
  tasks:
  - name: create versioned directory
    file: path={{ deploy_folder }}/{{ build_tag }} state=directory

  - name: sync files to folder
    synchronize: src={{ workspace }} dest={{ deploy_folder }}/{{ build_tag }}/ archive=yes rsync_opts=--exclude=.git

  - name: delete symlink
    file: path={{ symlink_name }} state=absent

  - name: link new site
    file: src={{ deploy_folder }}/{{ build_tag }} dest={{ symlink_name }} state=link

  - name: get list of old releases
    shell: "ls -1r {{ deploy_folder }} | tail -n +{{ releases_to_keep | int + 1 }}"
    register: ls_output

  - name: delete old versions
    file: name={{ deploy_folder }}/{{ item }} state=absent
    with_items: "{{ ls_output.stdout_lines }}"

And put this in as the build step for the jenkins project.

ansible-playbook ${JENKINS_HOME}/ansible-playbooks/webdeploy-playbook.yml --extra-vars="build_tag=${BUILD_TAG} workspace=${WORKSPACE} deploy_folder=project1 symlink_name=project1-live server=webservers releases_to_keep=3"
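The old-release cleanup hinges on one shell pipeline: ls -1r sorts the release directories in reverse (newest tag first, assuming tags sort lexicographically), and tail -n +4 prints everything after the first three, i.e. the releases to delete when releases_to_keep=3. You can check the logic with throwaway directories:

```shell
# Five fake releases; with releases_to_keep=3, "tail -n +4" should list
# the two oldest for deletion.
deploys=$(mktemp -d)
mkdir "$deploys"/build-1 "$deploys"/build-2 "$deploys"/build-3 \
      "$deploys"/build-4 "$deploys"/build-5
ls -1r "$deploys" | tail -n +4    # prints build-2 and build-1
```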

This was a pretty tough way to go about learning it, but hopefully this helps out my fellow command line junkies trying to learn Ansible playbooks.

Learning Ansible like a Command Line addict

I’ve always been hindered by my inability to deploy code using existing tools, and oftentimes I lean too heavily on my ability as a programmer to solve the problem myself. As a result, I’ve written multiple deployment systems from scratch and never learned the common tools. So I’ve set out to learn Ansible, but I’m gonna do it in the most backwards way possible.

First, we have to get Ansible installed. I have a Jenkins build server that compiles and packages artifacts, so it makes sense that it would be able to deploy those packages when approved. Luckily, Ansible provides a nice Aptitude package repo.

Add the file /etc/apt/sources.list.d/ansible.list with the contents:

deb trusty main

Then run these commands:

sudo apt-key adv --keyserver --recv-keys 93C4A3FD7BB9C367
sudo apt-get update
sudo apt-get install ansible

And now you should have Ansible installed. So… now what?

Well, I wanted to use this to deploy to remote servers via SSH. So let’s try adding some servers.

Edit /etc/ansible/hosts and add this at the bottom (obviously change for your server needs)

web01 ansible_ssh_host=
web02 ansible_ssh_host=

And we need to add a username so that we have a deploy user with the appropriate access. Add a file /etc/ansible/group_vars/webservers​ and give it the following contents

---
ansible_ssh_user: deploy

NOTE: This config file is formatted YAML. Those three dashes at the top are actually pretty critical. Don’t miss them!

This will make all connections use the deploy user, regardless of which user on the deploy server is running the command. Well, that’s cool. So now what? Let’s test it!

ansible -m ping webservers

Should fire off a command to each server listed under the webservers group, and return back if that server responds correctly.

midas@deploy:~$ ansible -m ping webservers
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff.
Are you sure you want to continue connecting (yes/no)? web01 | UNREACHABLE! => {
 "changed": false, 
 "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", 
 "unreachable": true
}
web02 | UNREACHABLE! => {
 "changed": false, 
 "msg": "Failed to connect to the host via ssh: Warning: Permanently added '' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", 
 "unreachable": true
}

Oh god, what a mess. It looks like we forgot a critical step: adding a user and SSH key to each of the web servers we are trying to deploy to. Since I want my Jenkins install to be able to do this, I’ll switch over to the jenkins user and generate a key pair for this account.

sudo su - jenkins
ssh-keygen -t rsa -b 4096
cat ~/.ssh/

Now that I’ve got a key, I need to create a deploy user on each of my target servers and add the jenkins public key to the authorized_keys file. (This step is left as an exercise for the reader). Now we just need to test it, so from the jenkins account, run

ssh deploy@ "echo hello"
ssh deploy@ "echo hello"

and make sure you accept the ECDSA key fingerprint for the first connection. Awesome!

So now from the jenkins account, let’s run that ping command again.

jenkins@deploy:~$ ansible -m ping webservers
web02 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
web01 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}

Hooray! But that’s hardly useful, so let’s get cracking at something meaningful. Part two, coming soon!

Update: You can find part 2 here


Tiling web assets (sprites) using montage

I’m updating a few sites to include some social media links, but finding nice icons is tough. There are lots of free icon sets on DeviantArt, and one I found is pretty great!

Unfortunately, the assets all come in separate files. This makes them easy to use right away, but if you want to support all of them, it’s a little inefficient. So I decided to combine the images using montage, a tool from the ImageMagick suite.

montage -adjoin *.png -background none -size +42+42 -geometry +1+1 montage.png

The images I have are 42px by 42px and transparent. This line will tile all the images in the folder, assuming each one is 42px by 42px, and leave a 1px border around each of them.

This is all well and good, now I’ve got a nicely tiled image:

Transparent with white icons. Not easy to see…

But how do I use this? Sprites! In the HTML I want something simple so it’s easy for me to add or adjust the icon.

<a class="social-media wordpress" href=""></a>

Then I just add in a bit of CSS styling…

.social-media {
    width: 42px;
    height: 42px;
    display: inline-block;
    background-image: url('../images/social_media_sprite.png');
}
.social-media.wordpress { background-position: -177px -221px; }

And presto! I can slice out any icon from the spritesheet and add it to my page with a simple classname.
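The background-position offsets follow directly from the montage geometry: each 42px icon gets a 1px border on every side (from -geometry +1+1), so the grid pitch is 44px and the icon in column c, row r (0-based) sits at -(44c + 1), -(44r + 1). A quick shell check for the wordpress icon, assuming it sits at column 4, row 5 of my sheet:

```shell
# Compute the sprite offset for a 42px icon with a 1px border per side.
pitch=44       # 42px icon + 1px border on the left and right
col=4; row=5   # 0-based grid position of the icon
echo "background-position: -$((pitch * col + 1))px -$((pitch * row + 1))px;"
# prints: background-position: -177px -221px;
```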

The entire spritesheet and css file can be found here.

Sitecore 8.2 Custom Error Page

I spent a long time googling for an answer to this, and never really found one. I talked to my friend Noel who showed me the way his team implemented a custom error page for their Sitecore site, and I’m documenting it here.

I wanted a single, shared error page that was editable inside of the Sitecore Content Editor that would show up for any 404 or 500 error page (or any error page).

First, create a rich text component for what you want to display on this error page.

I put this under a “Global Content” section and copied the item path (which we’ll use later as a data source).

Then in the “Home” section of your site, add a page named “error”.

Edit its presentation details and add a rich text rendering (like the component you created earlier), then set its placeholder to ‘content’ and its data source to the path of your Rich Text Component.

Save and publish, and you should now have an /error page available on your site.

In the Sitecore configuration files, you’ll have to add or edit some settings.

In /App_Config/Sitecore.config, there are several setting fields that will need to be changed.

<setting name="LayoutNotFoundUrl" value="/error" />
<setting name="LinkItemNotFoundUrl" value="/error" />
<setting name="ItemNotFoundUrl" value="/error" />
<setting name="NoAccessUrl" value="/error" />
<setting name="ErrorPage" value="/error" />

If you require specific error messages for each page, just change these fields to point to whatever page you want inside Sitecore.

It’s that simple! No code changes necessary, unlike most of the Google results I found while trying to learn how to do this.

Using DotNet Core on Ubuntu to build an MVC website

I recently had some time to poke around outside of my normal comfort zone of open-source languages and try some new stuff from the Microsoft side of things. I was pleasantly surprised at how nice it is over the fence!

So the first thing I needed to do was install dotnet and a dotnet-friendly editor.

I am on Ubuntu 16.04, so I had to add the repository to my aptitude source list.

sudo sh -c 'echo "deb [arch=amd64] xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp:// --recv-keys 417A0893
sudo apt-get update

It’s a little complicated, but it makes installing and maintaining versions that much easier. To install dotnet core, it was as easy as

sudo apt-get install dotnet-dev-1.0.1

Bam! dotnet core installed! Oh wait. Ok, so things didn’t go perfectly, and I had to do some manual installs to get it actually working.

sudo apt-get -f install
sudo apt-get install liburcu4 liblttng-ust0 dotnet-dev-1.0.1

Next I needed an editor, and I figured Visual Studio Code was the right choice. So I added the repository.

curl | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64] stable main" > /etc/apt/sources.list.d/vscode.list'
sudo apt-get update

Which then made installing it just as simple.

sudo apt-get install code 

The next step was to get the environment set up to run C# application code nicely. So I opened Visual Studio Code and pressed Ctrl+Shift+P and typed ext install and chose to install the C# plugin from Microsoft.

Still with me? That’s it. The environment was really easy to set up. I decided to create a new MVC project from the command line (since I know it a lot better than my new editor) so I created a folder in my ~/src/ directory called the incredibly original dotnet_mvc.

mkdir -p ~/src/dotnet_mvc
cd ~/src/dotnet_mvc

To create a new default MVC project, use

dotnet new mvc

This will create essentially a “Hello World” type web application using Kestrel, a web server for ASP.NET.

The dependencies it has aren’t all automatically included, but a simple

dotnet restore

fixes those issues. (This is also managed by VS Code so if you do it in the IDE, there’s just a friendly popup asking you if you want to run the Restore action.)

Now we can actually start messing around inside the application! In Visual Studio Code, click File > Open Directory and navigate to the dotnet_mvc project folder. By default you have a HomeController.cs file inside the Controllers/ directory, and a Startup.cs file in the root which defines the server settings and url routes.

So proof of concept built, let’s see if it runs! Click the “Debug” icon on the left and press the little green arrow at the top of the screen. The project will be built and run, and VS Code will open your default browser to localhost:5000 where you will find a default web application.


Check back in next time when I add a new route, controller and support for a JSON REST API.

Starbound as a Service (SaaS)

Ever since the release of Starbound 1.0 I’ve dove back into it. I love the procedurally generated worlds, the platformer boss fights and, of course, playing with others.

Starbound comes with a Linux client AND a dedicated multiplayer server, so it’s basically a given that I would be hosting one. Now, you can run it through Steam, but that gets cumbersome if you want to play, too. So I used steamcmd and a custom systemd file so I can start and stop the server automatically, or with a really intuitive (for a Debian or Ubuntu user) command:

$ sudo service starbound start|stop|status

How did I do it? Simple!

Install steamcmd

I usually put custom installed stuff in my /opt directory, so I created a /opt/steamcmd directory and pulled down the steamcmd binary from the Steam CDN.

You can find all of these steps on the starbounder wiki site although it’s a little dated (it’s from before they had a 64bit linux client).

cd /opt
sudo mkdir steamcmd
sudo adduser steam
sudo chown steam:steam steamcmd
su - steam
cd steamcmd
tar -zxvf steamcmd_linux.tar.gz

By the end of this, you’ll have a usable steamcmd install, but with no games installed. So obviously, the next step is to

Install Starbound

In the terminal, run


Which will put you inside the steamcmd prompt steam>

force_install_dir starbound
app_update 211820

This bit will take a while since it has to download all of the files, so open a new terminal and

Add a systemd config file

In /etc/systemd/system, create a file called starbound.service and put this information in it.




The two fields you might need to customize here are the WorkingDirectory and the ExecStart paths. Since I have everything installed in /opt/steamcmd/starbound, these paths work for me.
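The body of the unit file didn’t survive the copy-paste above, so here is a minimal sketch consistent with a /opt/steamcmd/starbound install. The exact binary name and its location under linux64/ are assumptions; adjust them to wherever steamcmd put the server executable.

```ini
[Unit]
Description=Starbound dedicated server
After=network.target

[Service]
User=steam
WorkingDirectory=/opt/steamcmd/starbound/linux64
ExecStart=/opt/steamcmd/starbound/linux64/starbound_server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```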

Once the starbound steamcmd is installed, you can use

sudo service starbound start

to begin hosting a playable world! If you intend to host over a LAN or even on the internet, you’ll need to make a slight change to the starbound.config file in /opt/steamcmd/starbound/giraffe_storage directory.

Just change these three lines

"gameServerBind" : "::",
"queryServerBind" : "::",
"rconServerBind" : "::",

to your LAN IP. For instance, when I run

ip addr

It shows my IP as So my config file says

"gameServerBind" : "",
"queryServerBind" : "",
"rconServerBind" : "",
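If you’d rather not open an editor on the server, the same edit can be scripted with sed. This is just a sketch with a made-up address; substitute the IP that ip addr reports for you:

```shell
# Replace every wildcard "::" bind in starbound.config with the LAN IP.
ip="192.168.1.50"    # assumption: use your own address here
sed -i "s/\"::\"/\"$ip\"/" /opt/steamcmd/starbound/giraffe_storage/starbound.config
```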

Happy exploring!


Git Clean

The majority of my programming projects are tracked using git. It’s decentralized, stable, compact and easy to configure, so it’s an obvious choice as a developer.

It allows me to split off my work into separate, tracked branches so I can work on multiple features or changes at the same time without risking the health of the codebase.

Once I’ve finished a feature in a branch, I can merge it into master and can happily use the new code. But the branch I worked in is still there. After several dozen mini “projects” like this, my git branch list starts getting very long.

Luckily, I found this short command line script to clean up branches that have been merged into master!

git branch --merged master | grep -v "\* master" | xargs -n 1 git branch -d  

This lists all branches that have been merged into master (including master), filters out the master branch, and then runs git branch -d for each branch name found this way.
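Before wiring the destructive git branch -d into the pipeline, it’s worth doing a dry run that only prints what would be deleted; swap echo back to git branch -d once the list looks right:

```shell
# List merged branches that the cleanup would delete, without deleting them.
git branch --merged master | grep -v "\* master" | xargs -n 1 echo "would delete:"
```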

But wait! If you work with a remote git repository, this still leaves all of those branches out there! Never fear, there’s a (pretty complicated) way to clean those up, too!

git branch -r --merged master | grep origin | grep -v '>' | grep -v master | xargs -L1 | cut -d"/" -f2- | xargs git push origin --delete

This lists all remote branches merged to master, finds all of the branches from origin (you can replace origin with your remote destination of choice), filters out branches you don’t want to delete, and then runs git push origin --delete for each branch found this way.
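Most of that remote pipeline is plain text munging. The cut -d"/" -f2- stage strips only the leading "origin/" from each branch name (keeping any further slashes), so git push origin --delete gets the bare branch name:

```shell
# "origin/feature/login" -> "feature/login": drop only the first path segment.
echo "origin/feature/login" | cut -d"/" -f2-
# prints: feature/login
```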

Tada! Sparkling clean git branches locally and remote.

Ubuntu 15.10 Quad Monitor xorg.conf configuration

At work I was tasked with setting up a NOC-esque monitoring wall, using an old desktop, two video cards and 4 monitors. I wiped the box and installed Ubuntu 15.10 on it, and immediately jumped into a world of hurt. First, here’s the final result.

$ cat /etc/X11/xorg.conf
# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 352.21  (buildd@lgw01-37)  Thu Jul 23 11:50:49 UTC 2015

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 340.96  (buildmeister@swio-display-x86-rhel47-05)  Sun Nov  8 22:50:27 PST 2015

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" below "Screen0"
    Screen      2  "Screen2" below "Screen1"
    Screen      3  "Screen3" below "Screen2"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "1"
    Option         "StandbyTime" "0"
    Option         "SuspendTime" "0"
    Option         "OffTime" "0"
    Option         "BlankTime" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Philips 200P"
    HorizSync       30.0 - 97.0
    VertRefresh     56.0 - 85.0
    Option         "DPMS" "false"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Philips 200P"
    HorizSync       30.0 - 97.0
    VertRefresh     56.0 - 85.0
    Option         "DPMS" "false"
EndSection

Section "Monitor"
    Identifier     "Monitor2"
    VendorName     "Unknown"
    ModelName      "Philips 200P"
    HorizSync       30.0 - 97.0
    VertRefresh     56.0 - 85.0
    Option         "DPMS" "false"
EndSection

Section "Monitor"
    Identifier     "Monitor3"
    VendorName     "Unknown"
    ModelName      "Philips 200P"
    HorizSync       30.0 - 97.0
    VertRefresh     56.0 - 85.0
    Option         "DPMS" "false"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro NVS 290"
    BusID          "PCI:1:0:0"
    Screen         0
EndSection

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro NVS 290"
    BusID          "PCI:1:0:0"
    Screen         1
EndSection

Section "Device"
    Identifier     "Device2"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro NVS 290"
    BusID          "PCI:2:0:0"
    Screen         0
EndSection

Section "Device"
    Identifier     "Device3"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro NVS 290"
    BusID          "PCI:2:0:0"
    Screen         1
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "CRT-0"
    Option         "metamodes" "DVI-I-0: nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Device1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "CRT-1"
    Option         "metamodes" "DVI-I-1: nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen2"
    Device         "Device2"
    Monitor        "Monitor2"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "CRT-0"
    Option         "metamodes" "DVI-I-0: nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen3"
    Device         "Device3"
    Monitor        "Monitor3"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "CRT-1"
    Option         "metamodes" "DVI-I-1: nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

So, that’s what I got. Exciting, right? I hit a bunch of snags while trying to get 4 monitors working, so I figured I would share my findings.

  • The open-source nouveau driver could not keep up with all 4 monitors, so I had to install the nvidia drivers. I installed the nvidia-352 package (apt-get install nvidia-352) and things seem to be working fine.
  • Unity doesn’t seem to play nice on the video card, so installing Gnome 3 was a necessity. (apt-get install gnome-desktop-environment)
  • The nvidia-settings tool and twinview proved difficult, so I used 4 xscreens and xinerama.
  • There’s not a lot of documentation on getting 4 monitors working with Xinerama in xorg.conf; most multi-monitor setups use 2, or at most 3. To support 4 monitors on 2 video cards, I had to duplicate the devices and give them unique “Screen” identifiers. BUT NOTE! Each device’s Screen numbering starts at 0; it’s not a global setting.
  • On the flip side, in ServerLayout, the Screen fields are numbered from 0-3 when setting position. These don’t directly map to the previously defined Screen fields. (I was confused for a few hours on this point!)
  • Disabling the screensaver and the power settings/standby timer did nothing for stopping the screen timeout, so I had to configure the monitors to not report timeouts. Each Monitor has the DPMS option set to false, and the ServerLayout has a bunch of settings to disable all kinds of timeouts.

After a few days of trial-and-error (and ssh-ing in to reset xorg.conf) I was able to get a quad-monitor, always-on info center working in the office.