Build your own Minecraft server on EC2 and on a Raspberry Pi with Ansible

After some time working with Chef Server, and taking advantage of a change of job where I had to start automating deployments and configuration management from scratch, I decided to work with Ansible because of the simple (no central servers), clean (goodbye Ruby gems) and direct architecture it offers, and because of how close it stays to a system administrator's existing knowledge (SSH, Python, YAML…).


Recently I wanted to put this to use in a small personal project that combined one of my hobbies, video games, with this newly acquired Ansible knowledge.

So I developed a playbook (available at https://github.com/juanviz/ansible) that installs a Minecraft game server (today the third best-selling video game in history, still selling around 10,000 copies a day) on a public EC2 instance and on a Raspberry Pi 1 Model B on the local network (although nothing prevents making it accessible over the Internet too, just like the EC2 instance), so that any user can play Minecraft.
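As a rough sketch of the workflow (the playbook and inventory file names below are illustrative, check the repository readme for the real ones), running it boils down to cloning the repository and launching ansible-playbook against your inventory:

# clone the playbook repository
git clone https://github.com/juanviz/ansible.git
cd ansible
# run the playbook against your own inventory (file names are illustrative)
ansible-playbook -i inventory minecraft.yml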

The game's architecture is client/server, with both sides written in Java.

In the case of the EC2 instance I was able to use the official version of the server, https://mcversions.net/.

 

In the case of the Raspberry Pi, due to its hardware limitations, I used a modified server, Spigot, which I compiled myself following the Spigot installation instructions. With the settings included in the playbook (including the NoSpawnChunks plugin) it works swimmingly, with a limit of 5 simultaneous players, using 364 MB of RAM.
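Roughly, the build uses Spigot's BuildTools jar (the --rev value below is illustrative; on a Raspberry Pi 1 the build is very slow, so it may be easier to compile on a faster machine and copy the resulting jar over):

# download BuildTools and compile Spigot (requires git and Java)
wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
java -jar BuildTools.jar --rev 1.8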

The readme.md contains the basic instructions for running the playbook; really, you only have to edit the inventory to set the IP addresses / URLs of your EC2 instance and your Raspberry Pi.
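For example, a minimal inventory could look something like this (group names and addresses are illustrative; use the ones the playbook expects, as described in the readme):

[ec2]
ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com

[raspberrypi]
192.168.1.50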

For ease of use, it is recommended to add the following SSH configuration (edit $HOME/.ssh/config) on the workstation from which the playbook is executed (explained in the readme):

Host url-of-your-ec2-instance
  User ubuntu
  IdentityFile ~/.ssh/your-private-keypair.pem

The playbook installs Oracle Java 1.8 and start/stop scripts for the server.
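For reference, such a start script presumably wraps the usual Java invocation of the server jar, something along these lines (jar name and memory flags are illustrative; on the Raspberry Pi the heap has to stay much smaller, as noted above):

# typical foreground start command for a Minecraft/Spigot server (illustrative values)
java -Xms256M -Xmx1024M -jar minecraft_server.jar nogui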

Currently the playbook only supports Ubuntu/Debian, but it will be updated for other operating systems.

To verify that everything works correctly, you need to register at minecraft.net, buy your license (19.95 euros) and follow the simple instructions explained in this video.

The whole process of running the playbook, starting the server and connecting with the Minecraft client is explained in the following video.


Enjoy the game!

Documentation consulted:

minecraft.net
ansible.com
Raspberry Pi:
https://www.spigotmc.org/wiki/spigot-installation/#linux
http://minecraft.gamepedia.com/Pi_Edition
https://www.raspberrypi.org/learning/getting-started-with-minecraft-pi/worksheet/
https://pimylifeup.com/raspberry-pi-minecraft-server/

EC2:
https://qwiklabs.com/focuses/2628?locale=en

Ebook recommendation of the month – Proxmox HA

This book, Proxmox High Availability, is recommended for people who already have experience working with Proxmox (if that isn't your case, I strongly recommend reading an introductory Proxmox book first).

It offers a detailed explanation of how to get started building a Proxmox HA environment with a Proxmox VE cluster, a useful guide on migrating an existing system to a VE cluster, a chapter on disaster recovery of the cluster, and a final section on troubleshooting.

It is a well-written book with practical, real-world examples, and definitely worth a shot.


Recommended IT books of the month

I just want to recommend some of the latest and best IT books (available DRM-free at Packt) that I have read recently:

The first is Mastering Proxmox (Wasim Ahmed). Proxmox is based on KVM and container-based virtualization, and manages virtual machines, storage, virtualized networks, and HA clustering.

The book covers the basic topics with good examples; the only thing I missed was more information about ZFS.

Even so, it is worth the cost: it is a perfect guide for beginners and advanced users, explaining everything you need to run a complete Proxmox cluster, and it shows a lot of real scenarios with different architectures at the end of the book.

After reading this book you will be able to manage a complete Proxmox cluster backed by a Ceph storage cluster.

The second recommendation, for those of you interested in DevOps culture and automation tools, is Configuration Management with Chef-Solo (Naveed ur Rahman), which was reviewed by my colleague Jorge Moratilla and covers an easy and useful way to manage your platform without a client-server architecture.

It is ideal for people with basic Ruby and sysadmin knowledge who want to progress with automation tools, provided you know the basics and have some experience with this kind of tool.

 

CloudFormation and VPC real examples Part I

AWS CloudFormation is a very useful DevOps tool for deploying and updating a template and its associated collection of resources (called a stack). It lets you create your complete AWS infrastructure in several regions (remember the disaster recovery plan!), keep it under version control (it is JSON code!), and create, update and destroy the stack with confidence.

I recommend reading this presentation, which is a complete and very good briefing on AWS CloudFormation features.

I also strongly recommend checking the sample templates available at http://aws.amazon.com/cloudformation/aws-cloudformation-templates/.

Below is a real template (source code available at https://github.com/juanviz/AwsScripts/tree/master/cloudformation) that creates the infrastructure of an environment composed of the following resources:

  •  NATSecurityGroup
  •  PrivateSubnet
  •  PublicSubnet
  •  LoadBalancerSecurityGroup
  •  PrivateSubnetRouteTableAssociation
  •  PrivateSubnetNetworkAclAssociation
  •  PublicSubnetRouteTableAssociation
  •  PublicSubnetNetworkAclAssociation
  •  NATDevice
  •  ElasticLoadBalancer
  •  PrivateRoute
  •  NATIPAddress
  •  ElasticLoadBalancerBackend

The template can be executed from the AWS web console, https://console.aws.amazon.com/cloudformation, or from the shell with the CloudFormation command line tools, http://aws.amazon.com/developertools/2555753788650372.

Note: the template assumes that you have already created a VPC with its route tables and ACLs. A second post will explain a template that creates the whole infrastructure from scratch.
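If you go the shell route, creating the stack is a single command; for example, with the AWS CLI (the stack name, template file name and key pair value below are illustrative):

aws cloudformation create-stack \
  --stack-name staging-books-network \
  --template-body file://template.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair

Here is the full template: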

{
 "AWSTemplateFormatVersion" : "2013-07-10",
 "Description" : "Template which creates a public and private subnet for a new environment with its required resources included NAT instance and Load Balancer with AppCookieStickinessPolicy",

#### The Parameters block defines the inputs that the operator has to fill in with the right values for the resources of the environment we are going to create,
#### for example the key pair for the NAT instance.

"Parameters" : {
 "InstanceType" : {
 "Description" : "NAT instance type",
 "Type" : "String",
 "Default" : "m1.small",
 "AllowedValues" : [ "t1.micro","m1.small","m1.medium","m1.large","m1.xlarge","m2.xlarge","m2.2xlarge","m2.4xlarge","m3.xlarge","m3.2xlarge","c1.medium","c1.xlarge","cc1.4xlarge","cc2.8xlarge","cg1.4xlarge"],
 "ConstraintDescription" : "must be a valid EC2 instance type."
 },
 "WebServerPort" : {
 "Description" : "TCP/IP port of the balanced web server",
 "Type" : "String",
 "Default" : "80"
 },
 "AppCookieName" : {
 "Description" : "Name of the cookie for ELB AppCookieStickinessPolicy",
 "Type" : "String",
 "Default" : "Juanvi"
 },
 "KeyName" : {
 "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the NAT instance",
 "Type" : "String"
 }
 },

#### Data used in the creation of the NAT instance. The arch (32 or 64 bits) depends on the size chosen for the instance
 "Mappings" : {
 "AWSInstanceType2Arch" : {
 "t1.micro" : { "Arch" : "32" },
 "m1.small" : { "Arch" : "64" },
 "m1.medium" : { "Arch" : "64" },
 "m1.large" : { "Arch" : "64" },
 "m1.xlarge" : { "Arch" : "64" },
 "m2.xlarge" : { "Arch" : "64" },
 "m2.2xlarge" : { "Arch" : "64" },
 "m2.4xlarge" : { "Arch" : "64" },
 "m3.xlarge" : { "Arch" : "64" },
 "m3.2xlarge" : { "Arch" : "64" },
 "c1.medium" : { "Arch" : "64" },
 "c1.xlarge" : { "Arch" : "64" }
 },
#### Data used in the creation of the NAT instance. The AMI ID depends on the region chosen for the instance

 "AWSRegionArch2AMI" : {
 "eu-west-1" : { "32" : "ami-1de2d969", "64" : "ami-1de2d969", "64HVM" : "NOT_YET_SUPPORTED" }
 }
 },
#### Here begins the definition of the resources that we want to create
 "Resources" : {
#### Creation of the public subnet
 "PublicSubnet" : {
 "Type" : "AWS::EC2::Subnet",
 "Properties" : {
 "VpcId" : "yourvpcid",
 "CidrBlock" : "172.20.1.32/27",
 "Tags" : [
 {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} },
 {"Key" : "Network", "Value" : "Public" }
 ]
 }
 },
#### Association of the manually created route table with the public subnet
 "PublicSubnetRouteTableAssociation" : {
 "Type" : "AWS::EC2::SubnetRouteTableAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "RouteTableId" : "rtb-xxxxx"
 }
 },
#### Association of the manually created network ACL with the public subnet

 "PublicSubnetNetworkAclAssociation" : {
 "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "NetworkAclId" : "acl-xxxxxx"
 }
 },
#### Creation of the private subnet
 "PrivateSubnet" : {
 "Type" : "AWS::EC2::Subnet",
 "Properties" : {
 "VpcId" : "vpc-a31de1c8",
 "CidrBlock" : "172.20.65.0/24",
 "Tags" : [
 {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} },
 {"Key" : "Network", "Value" : "Private" }
 ]
 }
 },
#### Association of the manually created route table with the private subnet

 "PrivateSubnetRouteTableAssociation" : {
 "Type" : "AWS::EC2::SubnetRouteTableAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PrivateSubnet" },
 "RouteTableId" : "rtb-baa892d1"
 }
 },
#### Association of the manually created network ACL with the private subnet

 "PrivateSubnetNetworkAclAssociation" : {
 "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PrivateSubnet" },
 "NetworkAclId" : "acl-xxxxxxx"
 }
 },
#### New route in the private route table to send all outbound Internet traffic through the NAT instance
 "PrivateRoute" : {
 "Type" : "AWS::EC2::Route",
 "Properties" : {
 "RouteTableId" : "rtb-xxxxx",
 "DestinationCidrBlock" : "0.0.0.0/0",
 "InstanceId" : { "Ref" : "NATDevice" }
 }
 },
#### Creation of the ELB that balances traffic to the front-end web servers, placed in the public subnet created above
 "ElasticLoadBalancer" : {

 "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
 "Properties" : {
 "SecurityGroups" : [ { "Ref" : "LoadBalancerSecurityGroup" } ],
 "Subnets" : [ { "Ref" : "PublicSubnet" } ],
 "AppCookieStickinessPolicy" : [{
 "PolicyName" : "BooksLBPolicy",
 "CookieName" : { "Ref" : "AppCookieName" }
 } ],
 "Listeners" : [ {
 "LoadBalancerPort" : "80",
 "InstancePort" : { "Ref" : "WebServerPort" },
 "Protocol" : "HTTP",
 "PolicyNames" : [ "BooksLBPolicy" ]
 } ],
 "HealthCheck" : {
 "Target" : { "Fn::Join" : [ "", ["HTTP:", { "Ref" : "WebServerPort" }, "/"]]},
 "HealthyThreshold" : "3",
 "UnhealthyThreshold" : "5",
 "Interval" : "30",
 "Timeout" : "5"
 }
 }
 },
#### Creation of the internal ELB that balances traffic to the backend servers, placed in the private subnet created above

 "ElasticLoadBalancerBackend" : {

 "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
 "Properties" : {
 "SecurityGroups" : [ { "Ref" : "LoadBalancerSecurityGroup" } ],
 "Subnets" : [ { "Ref" : "PrivateSubnet" } ],
 "AppCookieStickinessPolicy" : [{
 "PolicyName" : "BooksLBPolicy",
 "CookieName" : { "Ref" : "AppCookieName" }
 } ],
 "Listeners" : [ {
 "LoadBalancerPort" : "80",
 "InstancePort" : { "Ref" : "WebServerPort" },
 "Protocol" : "HTTP",
 "PolicyNames" : [ "BooksLBPolicy" ]
 } ],
 "HealthCheck" : {
 "Target" : { "Fn::Join" : [ "", ["HTTP:", { "Ref" : "WebServerPort" }, "/"]]},
 "HealthyThreshold" : "3",
 "UnhealthyThreshold" : "5",
 "Interval" : "30",
 "Timeout" : "5"
 }
 }
 },
#### Elastic IP attached to the NAT instance
 "NATIPAddress" : {
 "Type" : "AWS::EC2::EIP",
 "Properties" : {
 "Domain" : "vpc",
 "InstanceId" : { "Ref" : "NATDevice" }
 }
 },
#### Creation of the NAT instance
 "NATDevice" : {
 "Type" : "AWS::EC2::Instance",
 "Properties" : {
 "InstanceType" : { "Ref" : "InstanceType" },
 "KeyName" : { "Ref" : "KeyName" },
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "SourceDestCheck" : "false",
 "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
 { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
 "Tags" : [
 {"Key" : "Name", "Value" : "nat.public-staging-books" }
 ],
 "SecurityGroupIds" : [{ "Ref" : "NATSecurityGroup" }]
 }
 },
#### Creation of the security group for the NAT instance
 "NATSecurityGroup" : {
 "Type" : "AWS::EC2::SecurityGroup",
 "Properties" : {
 "GroupDescription" : "Enable internal access to the staging NAT device",
 "VpcId" : "vpc-a31de1c8",
 "SecurityGroupIngress" : [
 { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0"} ],
 "SecurityGroupEgress" : [
 { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "udp", "FromPort" : "53", "ToPort" : "53", "CidrIp" : "0.0.0.0/0"} ]
 }
 },
#### Creation of the security group for the load balancers
 "LoadBalancerSecurityGroup" : {
 "Type" : "AWS::EC2::SecurityGroup",
 "Properties" : {
 "GroupDescription" : "Enable HTTP access on port 80 and 443",
 "VpcId" : "vpc-a31de1c8",
 "SecurityGroupIngress" : [ { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" } ],
 "SecurityGroupEgress" : [ { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ],
 "SecurityGroupIngress" : [ { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0" } ],
 "SecurityGroupEgress" : [ { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ]
 }
 }
 },
#### Output values returned when the stack is created, saved as part of the stack's information
 "Outputs" : {
 "URL" : {
 "Description" : "URL of the website",
 "Value" : { "Fn::Join" : [ "", [ "http://", { "Fn::GetAtt" : [ "ElasticLoadBalancer", "DNSName" ]}]]}
 }
 }
 }
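Once the stack is created, the URL defined in the Outputs block can also be retrieved from the shell, for example (stack name illustrative, as above):

aws cloudformation describe-stacks --stack-name staging-books-network --query 'Stacks[0].Outputs'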

Highlights about Barcelona DevOps Days 2013

I attended DevOps Days Barcelona on the 10th and 11th of October, and I want to share some thoughts about the topics we discussed there.

My favourite presentation was “The DevOps Pay Raise: Quantifying Your Value to Move Up the Ladder”  by Tom Levey, an amazing speaker.

In this presentation he explains how to show the benefits of DevOps culture inside the company in a way that business people understand (money). I took note of some of the KPIs he showed us and how correlating them reveals that the poor performance of a site (which in most cases is a consequence of badly managed development, deployment, test and release processes) can hurt your profits: in around 95% of cases, for example, poor response times caused by errors from a bad release translate into lower revenue from the site, which means less profit.

Another useful and interesting presentation was “DevOps road at Tuenti” by Oscar San José and Victor García, about how they have developed and implemented tools that drive DevOps culture inside the company: making it easy for developers to deploy and test their code, providing information about all stages, and ensuring higher code quality by automating testing and other tedious, risky tasks such as code versioning, tagging, builds and distribution to app markets. The tooling also helps product owners track the lifecycle of a feature, and everything can be tracked from the same place by everyone in the company.

For people working in a non-IT company who want to introduce DevOps culture, I strongly recommend the presentation “Even a classical retail company can go DevOps… With success” by Sylvain Loubradou, which explains the long and difficult journey of a quite normal IT department in a classical company towards the DevOps philosophy.

Finally, I have to recommend “The Enemies of Continuous Delivery” by Isa Vilacides, who showed us the keys to achieving a successful continuous delivery process from the QA point of view that DevOps culture needs.

I also want to highlight something that was very interesting but badly organised: the open spaces, where we discussed hot topics such as configuration management and SQL updates, which in some cases are very difficult to automate reliably.

As for the ignite talks, they are a real challenge for speakers: they only have 5 minutes to present 20 slides, at 15 seconds per slide. Take a look at “Poka-yoke and DevOps” by Ulf Månsson.

These kinds of events are a must for exchanging knowledge and discovering things that can greatly improve our day-to-day work and the performance of our business.

Finally, I want to thank Rhommel Lamas for all his effort, and all of the sponsors (http://www.strsistemas.com, http://www.moritz.com, http://www.rakuten.co.jp, http://www.ca.com/es/lpg/appvelocity/home.aspx, http://www.tid.es, http://www.3scale.net, http://www.github.com) for making this event possible in Spain for the first time.

See you at the next DevOps Days!