Sunday 21 October 2018

Messing with jq

JSON seems to be the go-to text format these days.  I wasn't a massive fan because I think YAML is nicer to read, and I still think XML is great (think XPath, XSLT etc.), albeit not very hipster.

Anyway, tools are what tend to make a technology IMHO, and jq is excellent once you do a bit of reading.  I particularly liked this tutorial - https://programminghistorian.org/en/lessons/json-and-jq

Here are a couple of examples that are useful.  I was playing with the Digital Ocean API.

e.g.
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/images?type=distribution" | jq '.images[] | {distribution: .distribution, id: .id, name: .name}'
This was a quick way for me to transform the distributions output into something human readable:
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/images?type=distribution" | jq '.images[] | select(.distribution == "CentOS") | {distribution: .distribution, id: .id, name: .name}'

This one just gets the slug names, which I can set as a Terraform data source:
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/images?type=distribution" | jq '.images[] | .slug'
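A related trick: wrapping the iterator in brackets collects the results into one JSON array rather than a stream of bare values, which is handy when whatever consumes the output wants valid JSON.  A minimal sketch against a hand-made sample (the sample data is mine, not real API output):

```shell
# Hypothetical sample shaped like the DO /v2/images response
sample='{"images":[{"slug":"centos-7-x64"},{"slug":"debian-9-x64"}]}'

# [.images[].slug] collects all the slugs into a single JSON array
echo "$sample" | jq '[.images[].slug]'
```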

Sunday 3 September 2017

Forwarding logs with rsyslog

rsyslog never seems to get as much attention as it should now that everyone seems to use logstash.  I like rsyslog - it's always in the system repositories and it's been about for ages.

In this scenario I wanted to forward log files from a local file system verbatim to a remote host.  Here is what I came up with.  Both configurations show the rsyslog.conf in its entirety.  Adding TLS would also be easy and might be the subject of a future post.  CentOS 7 and the stock rsyslog 7.4.7 are used.

We use the tag name to set the target filename on the server.  Notice how, using the dynaFile parameter of omfile, we can set the filesystem layout.  In this example, where the tag is set to icecast-error and the log is shipped from host relay3, on the server we end up with the file being created as
/var/log/relay3/icecast-error.log

Notice queue parameters are used to provide a local filesystem based queue.  This configuration will store messages in a queue and write the queue to disk should the system get rebooted.  This queue is useful for keeping messages locally when the remote logging server goes down or cannot be contacted.  I played with this quite a lot and it seems entirely robust.

Also note in the server configuration that each line from the imported file is now wrapped in a new rsyslog message, and we just want the msg portion (our original shipped line) from it.

Client configuration
global(WorkDirectory="/var/spool/rsyslog")

ruleset(name="forward"){
    action(type="omfwd" target="logs.example.com"
    port="601" protocol="tcp"
    queue.type="LinkedList"
    queue.filename="srvrfwdQueue"
    queue.saveonshutdown="on" action.resumeRetryCount="-1")
}

input(type="imfile"
    file="/var/log/icecast/error.log"
    tag="icecast-error" ruleset="forward")

input(type="imfile"
    file="/var/log/icecast/access.log"
    tag="icecast-access" ruleset="forward")  


Server Configuration
$umask 0000 # Reset the umask so the CreateMode values work as expected
$MainMsgQueueSize 1000000
$MainMsgQueueDequeueBatchSize 1000

# http://www.rsyslog.com/doc/master/configuration/properties.html
template(name="directoryPerHost" type="string" string="/var/log/%source%/%syslogtag%.log")

# Strip the leading space induced by RFC3164, mmrm1stspace module can be used
# in >= 8.24.0
template(name="messageOnly" type="string" string="%msg:2:$:%\n")

ruleset(name="remoteInbound"){
    action(type="omfile"
        template="messageOnly"
        dynaFile="directoryPerHost"
        fileCreateMode="0644"
        dirCreateMode="0755")
}

input(type="imptcp" port="601" ruleset="remoteInbound")
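As the comment above notes, on rsyslog >= 8.24.0 the leading space left by RFC3164 can be stripped with the mmrm1stspace message-modification module instead of the substring template.  A sketch of how that server ruleset would look (untested on the 7.4.7 setup described here, since the module is newer):

```
module(load="mmrm1stspace")

template(name="messageOnly" type="string" string="%msg%\n")

ruleset(name="remoteInbound"){
    action(type="mmrm1stspace")
    action(type="omfile"
        template="messageOnly"
        dynaFile="directoryPerHost"
        fileCreateMode="0644"
        dirCreateMode="0755")
}
```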

Saturday 22 October 2016

Ansible - Getting a list of ip addresses for a specific group

I had a very simple problem.  I needed to get a list of IP addresses from a group of slave servers in inventory to use as part of a firewall ruleset task in a playbook for a master server (so access to the master was only allowed from the slaves).  I knew the IP addresses were accessible via hostvars, but as I only run this playbook on the master (I don't use a site.yml type god playbook) I knew I needed to run something on the slaves first to gather the facts that build hostvars.  Also, I wasn't using a template, so that ruled out using for loops in Jinja2.

My approach to solving this was:
  • Create a playbook that runs on the slave servers but does nothing
  • Include this playbook in the master server playbook
  • Filter the hostvars data to produce a list consisting only of the ip4 addresses for a specific host group

Note: You need Ansible >= 2.1 for the map extract feature to work

slave_do_nothing.yml
---
# Does nothing but will gather facts for later use
- hosts: slave
  tasks: [] 


master_server_test.yml
---
- include: slave_do_nothing.yml
- hosts: master
  tasks:
    - name: Do something with the slave ip address now in item
      set_fact:
         n00b: "{{ n00b| default([]) + [item] }}"
      with_items:
        "{{ groups['slave']|map('extract', hostvars,
        ['ansible_eth0', 'ipv4', 'address'])|list }}"

    - debug: msg="{{ n00b }}"


I've included something else I discovered along the way: how to create a list and append items to it.  You can see that the fact n00b is created, defaults to an empty list, and we add each item passed from with_items.  We then use debug to print it at the end (a list of IP addresses).  This should be very useful for debugging tasks in the future.

As a further example, here is the task I use in a playbook for an icecast master server to allow inbound connections from the icecast relays.

- name: Allow inbound from icecast relays
  firewalld:
    zone: public
    state: enabled
    permanent: true
    immediate: true
    rich_rule:
      "rule family=ipv4 source address={{ item }}
      port port={{ icecast_port }} protocol=tcp accept"
  with_items:
    "{{ groups['icecast-relay']|map('extract', hostvars,
    ['ansible_eth0', 'ipv4', 'address'])|list }}"
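Incidentally, if you only need the list itself rather than a per-item loop, the whole expression can be assigned to a fact in one step, without with_items.  Something like this (the variable name is mine):

```yaml
- name: Collect relay IPv4 addresses in one go
  set_fact:
    relay_ips: "{{ groups['icecast-relay'] | map('extract', hostvars,
                   ['ansible_eth0', 'ipv4', 'address']) | list }}"
```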

Monday 14 March 2016

More Xbox One Firewall Rules - Elite Dangerous

Elite Dangerous needs UDP 19364 outbound!

Friday 19 February 2016

SSH Don't offer a key, use a password

This is a problem I've been having more and more as my collection of keys and ~/.ssh/config grows.  Sometimes you just need to log in with a password, but ssh will try all of your keys and then the server prevents you from trying a password challenge because you've failed too many logins, e.g.

Received disconnect from 10.12.66.18: 2: Too many authentication failures for ...

Just use ssh -o PubkeyAuthentication=no 
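If a particular host always needs password auth, the option can also be made permanent in ~/.ssh/config rather than typed each time.  Something like this (host name and address are illustrative):

```
Host legacybox
    HostName 10.12.66.18
    PubkeyAuthentication no
    PreferredAuthentications password
```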

Wednesday 13 January 2016

Disable Anonymous Binds in IPA v3 (and enable them again)

I was not entirely happy with the documentation for this here: 
While correct, it gives me no idea how to check the current configuration, how to turn anonymous binds back on again, or how to test my changes.

Here is my approach:

Check the current config with this LDAP query (there may be room for optimising this):
ldapsearch -x -u -h ipa.server -b cn=config "(cn=config)" nsslapd-allow-anonymous-access -W -D "cn=Directory Manager" 

That should yield:
...
nsslapd-allow-anonymous-access: on


or
...
nsslapd-allow-anonymous-access: rootdse


I then created two simple ldifs to enable and disable anonymous binds

disable-anonymous-binds.ldif
# disable-anonymous-binds.ldif
dn: cn=config
changetype: modify
replace: nsslapd-allow-anonymous-access
nsslapd-allow-anonymous-access: rootdse


enable-anonymous-binds.ldif
# enable-anonymous-binds.ldif
dn: cn=config
changetype: modify
replace: nsslapd-allow-anonymous-access
nsslapd-allow-anonymous-access: on 


Either of which can be run with
ldapmodify -x -D "cn=Directory Manager" -W -h ipa.server -f enable|disable-anonymous-binds.ldif

Sunday 6 December 2015

Building Python from Source on OpenSUSE

My very first post on this blog was how to build Python from source.  In 2015 it turns out that this is a bit harder than it should be, and having wasted most of the morning working it out I'm posting the solution for future reference.

Building Python in the usual way (in this case Python 3.5)

./configure --prefix=/usr/local/python3.5 
make 
sudo make altinstall 

installs a Python that is broken, e.g.

/usr/local/python3.5/bin/python3.5 -c "import random" 
... 
ImportError: No module named 'math' 

It turns out that SUSE then installs some modules into /usr/local/python3.5/lib64/, which Python does not include in its sys.path.

This is a bit of a hack, but the easiest way to fix it (for me) is to

sudo ln -s /usr/local/python3.5/lib64/python3.5/lib-dynload /usr/local/python3.5/lib/

This issue is well known, it seems - Python bug reports here and here
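To see where (or whether) an interpreter picks up its compiled modules, you can compare the build-time lib-dynload location against sys.path.  This sketch uses whatever python3 is on the PATH, not the 3.5 build above:

```shell
# DESTSHARED is the build-time location of lib-dynload; on a broken
# install it sits outside every sys.path entry
python3 - <<'EOF'
import sys
import sysconfig

print(sysconfig.get_config_var('DESTSHARED'))
print([p for p in sys.path if 'dynload' in p])
EOF
```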