Lia.Skalkos

Life in code.

Introducing the JavaScript Module Pattern

Some time ago, in a job far far away (not that far away), I saw some JavaScript code that looked like this:

;(function($) { /* etc… */ }(jQuery));

I wondered what this strange beast was and why anyone would write such code. It took me a little while to find out why, but now I know, and I thought I’d share it with the world. That reason is called the module pattern.

Why use the module pattern?

Imagine you are writing an application with lots of JavaScript. You could just write your JavaScript in one long, spaghetti-like stream in a file, but there would be a few problems with this:

  1. You might pollute the global namespace, creating conflicts with other libraries or scripts your app makes use of
  2. You might expose a piece of code that shouldn't be public -- like a connection to an API -- leaving it accessible with one simple function call from the console
  3. Your code will be disorganized and get harder and harder to manage

Here is an example of a piece of code that doesn’t follow the module pattern, or any other pattern, that might produce the above problems. It’s just some vanilla JS.
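
The original snippet isn't shown here, but a minimal stand-in (hypothetical names) might look like this:

```javascript
// Plain "spaghetti" JS: everything lands in the global scope
var foo = 'bar';

function doSomething() {
  return foo + '!';
}

doSomething(); // any script on the page can call this, and `foo` is global
```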

In the above example, “foo” might conflict with another variable named foo in the global namespace. Also, anyone can call “foo” by typing it directly into the browser console.

Enter the module pattern.

How it Works

The module pattern is, as the name suggests, a design pattern. Design patterns, per a quick Google search, are “solutions to software design problems you find again and again in real-world application development. Patterns are about reusable designs and interactions of objects.”

Here is the same piece of code from above in a very plain module pattern format:

Basic Module Pattern
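
A sketch of the same stand-in code from above, now wrapped up (names are illustrative):

```javascript
// The same code, wrapped in an IIFE: nothing leaks into the global scope
(function() {
  var foo = 'bar';

  function doSomething() {
    return foo + '!';
  }

  doSomething(); // still runs immediately, exactly as before
}());
```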

Here we are wrapping the same code from before in an anonymous function expression that executes immediately. You create one of these by declaring an anonymous function, adding the parentheses on the end to signal execution, and wrapping the whole thing in a set of parentheses to tell the interpreter this is an expression to be evaluated. This is called an Immediately-Invoked Function Expression (IIFE).

This code creates its own local scope, but then immediately executes our code, giving us the same functionality as before, but protecting it from the global scope.

There are a few shorthand ways to write the IIFE that will save you a couple of bytes and might look a little cleaner. You can remove the wrapping parentheses and use an exclamation mark or plus sign at the beginning of your function.
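
For example, either of these sketches works:

```javascript
// Shorthand IIFEs: a leading unary operator makes `function` an expression,
// so the wrapping parentheses can be dropped
!function() {
  var foo = 'bar';
}();

+function() {
  var foo = 'bar';
}();
```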

A few variations on this simple pattern make it even more useful. In the below example, we store our module in a variable for potential use in other parts of our application. If you are familiar with Ruby, you can see how the code below resembles a Class.

Module Export Pattern
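
A minimal sketch of the export pattern (the `DogModule` name and its methods are illustrative):

```javascript
var DogModule = (function() {
  // private to the closure, like private state in a Ruby class
  var sound = 'Woof';

  function bark() {
    return sound + '!';
  }

  // expose only what we choose: the module's public API
  return {
    bark: bark
  };
}());

DogModule.bark(); // "Woof!" -- but `sound` is unreachable from outside
```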

Again, we get the benefit of protecting our code from the global scope through the use of a closure (and allowing us to use any variable names we want locally), but we are able to reuse it through the DogModule variable across our application.

Module Import Pattern

Similarly, we can import a module and extend its functionality through the module import pattern. To do this, we pass the module along as an argument to a function that has a parameter defined. Typically, the argument and the parameter are named differently to highlight that the module is going to be changed or extended somehow, but it’s not necessary. The module is stored in the local scope thanks to the parameter, giving a small performance boost (local lookups are faster than global ones).
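
A self-contained sketch (the `dog` parameter name and `fetch` method are illustrative):

```javascript
var DogModule = {
  bark: function() { return 'Woof!'; }
};

// Import pattern: the module comes in as an argument, and the parameter
// (`dog`, deliberately named differently) holds it in local scope
(function(dog) {
  dog.fetch = function() {
    return dog.bark() + ' Fetch!';
  };
}(DogModule));

DogModule.fetch(); // "Woof! Fetch!"
```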

Module Export Pattern w/Loose Augmentation

One downside of the pattern so far is that our entire module must be contained in one file. However, we can add the loose augmentation pattern to free us from this constraint. The way this pattern works is that each file imports your module (see module import pattern above) if it has already been defined and “augments” it; otherwise, if it is the first file to load, it uses a new object to define the module.
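
A sketch of how one such file might look (names are illustrative):

```javascript
// Augment DogModule if it already exists, or start from a fresh object if
// this file happens to load first -- `DogModule || {}` covers both cases
var DogModule = (function(module) {
  module.sit = function() {
    return 'Sitting.';
  };
  return module;
}(DogModule || {}));
```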

TLDR?

  • The module pattern makes sure your code doesn't interfere with the global scope.
  • It allows you to extend the functionality of other modules.
  • It's a powerful way of organizing your code.

Sources

http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html
http://toddmotto.com/mastering-the-module-pattern/
https://teamtreehouse.com/library/the-module-pattern-in-javascript-2

The Airport Runway Problem: A Look at Binary Trees and Big O


Let's imagine a problem: you are in charge of building an airport runway scheduling system. Let's say that today you have 3 take-offs and 3 landings scheduled, but now another airline wants to see if they can schedule a take-off today as well. How could we handle this new request such that it isn't too big a deal?

Well, we could store our take-offs and landings in an array in order. But now we have this new request for a take-off that happens to fall in the middle of our take-offs and landings. Without even checking to see if it would overlap with another flight, we could append it to the end of our array in O(1), constant time, and then re-sort the entire thing. With a quicksort or a mergesort, this could be done in (an average of) O(n log n) time, but this doesn't seem ideal.

Given our array is already in order, we could skip the full re-sort and search it for the right insertion point. This could be done in O(log n) time via a binary search (a divide-and-conquer strategy). But now (in the worst case) we might have to shift over all the elements in our array every time we want to add a new take-off or landing, meaning we are looking at an O(n) operation just for the insertion of the new data. Again, not ideal if you're trying to build a system that schedules many take-offs and landings a day, right? Enter binary trees.
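
A quick sketch of that array approach (hypothetical times, in hours):

```javascript
// Find the insertion index in a sorted array via binary search: O(log n)
function insertionIndex(arr, value) {
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < value) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// But splice still shifts every later element over: O(n) for the insertion
const schedule = [3, 8, 10, 14, 21];
schedule.splice(insertionIndex(schedule, 9), 0, 9);
// schedule is now [3, 8, 9, 10, 14, 21]
```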

Per Wikipedia, a binary tree is "a tree data structure in which each node has at most two children, which are referred to as the left child and the right child." Binary trees also have a recursive, mathematical (set theory) definition: a binary tree is either empty, or a triple (L, S, R), where S is a single node and L and R are themselves binary trees.

Let's say the example above represents our flight system in military time. We have flights scheduled at 8am, 3am, 10am, and so on. Now let's say we want to add a take-off at 9am. To do this we start at the root node and look at its value -- 8am. Because 9am is later than 8am, we go to the right. If we were booking an earlier take-off, we would go to the left. Because we have gone to the right, the next node is 10am. We ask the same question again: is 9am earlier or later than 10am? Because it's earlier, we go to the left and check if there is a node. No node, so we can go ahead and insert 9am as a new node to the left of 10am. The insertion itself -- linking in the new node -- is O(1), compared to the array, where it was O(n). Phew, that's a lot better!
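
The walk described above can be sketched like this (hypothetical flight times, in hours):

```javascript
class Node {
  constructor(value) {
    this.value = value;
    this.left = null;
    this.right = null;
  }

  insert(value) {
    if (value < this.value) {
      if (this.left) this.left.insert(value);
      else this.left = new Node(value);   // linking the node: O(1)
    } else {
      if (this.right) this.right.insert(value);
      else this.right = new Node(value);
    }
  }
}

const root = new Node(8);           // 8am at the root
[3, 10, 14].forEach(t => root.insert(t));
root.insert(9); // 9 > 8: go right; 9 < 10: go left; spot is empty, insert
// 9 now sits as the left child of 10
```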

Notice that in the worst case, though, finding the correct insertion point could be O(n) -- even worse than a sorted array. This would happen if our binary tree happened to get filled in order, say, 1am, 2am, 3am, and so on, because we would have to go through all the elements in our tree before reaching the correct point to add in 9am. From this, we can observe that the amount of time it takes to find our insertion point corresponds to the height of the tree, which for a balanced tree is O(log₂n).

The minimum height for a binary tree of n nodes is log₂n. In the case of 1 million scheduled flights, finding the insertion point would take on the order of 1,000,000 steps if the tree is fully in order, versus about 20 if the tree is "balanced" (log₂1,000,000 ≈ 20). We can see that there would be a huge difference in efficiency between an ordered binary tree -- essentially now a linked list -- and a so-called balanced binary tree. To get a balanced binary tree, we have to ensure that our data gets added in randomized order. If that isn't possible, then you have to perform certain operations at key times to keep the tree balanced.

If we want to spit out our schedule in order, in the case of both the array and the binary tree, we have to iterate through all the elements resulting in an O(n) operation in both cases. However, in the airport runway system, the binary tree wins over the array because it is much more efficient for inserting new values.

TLDR?

  • For sorted data that must be updated frequently, binary trees win out over arrays because they are more efficient at insertion.
  • But make sure the binary tree gets its data in random order (i.e. stays a balanced binary tree), because otherwise it degenerates into a linked list and finding the correct insertion point becomes O(n).
  • Big O notation is actually not that scary.
  • Graphing Big O functions on https://www.desmos.com/calculator is super helpful to get an idea of how they perform.

Sources

https://en.wikipedia.org/wiki/Binary_tree
https://www.youtube.com/watch?v=9Jry5-82I68
https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree

Working With Scheduling and Time in a Rails Application

Part I

We just finished up project mode here at Flatiron. Something that seemed to be surprisingly complicated in our final project was working with time. To give a summary of our application, MedMed allows doctors to prescribe medication to a patient with a start time and an end time. The result of this prescription is a list of scheduled doses that appear when the patient logs in to the system. There were a couple challenges with this functionality: One, how to conceptualize a scheduling system generally, and two, considering that each patient could be in a different time zone, how to ensure the patient sees the correct time for their scheduled dose.

Thinking Through the Domain

For starters, we had to think through our domain. We envisioned a Prescription class having a start time and an end time, but we needed to make the leap from that to showing a patient a list of scheduled medications they were supposed to take that day. This end-result functionality felt inherently different from a Prescription, though it was obviously closely tied to it. The tack we ended up taking was to envision our end goal -- a list of scheduled doses -- and to think about how we wanted our models to work based on that.

Given our goal, we realized it made sense to have a ScheduledDose model, each instance of which would represent a patient's scheduled dose. A ScheduledDose would have a scheduled time, a medication name, and would also be linked to a prescription and a patient. Thus, we needed to somehow derive our ScheduledDoses from a patient's Prescription.

Enter IceCube

Our initial instinct was that we needed some sort of Schedule model to represent recurring scheduled dose events. The problem of scheduling actually appeared to be a fairly complex one. One of the problems that Martin Fowler gets at in his article on scheduling is that as backend developers, we tend to think of our models in terms of properties rather than behavior, and the idea of a Schedule is more about behavior than properties. When you get right down to it, a Schedule really just encapsulates the idea of a recurring event. A doctor might want to schedule a medication to be taken daily, or twice a week, or perhaps every other week. Thinking about how we wanted our interface to function once again helped clarify our end goal. We realized that we wanted a doctor to be able to schedule a prescription much like you would schedule a Google Calendar event. After looking at the Google Calendar interface, which seemed to implement a set of rules, we suspected that scheduling was a problem that had been dealt with before in programming land. We therefore took to the internet, where we discovered IceCube.

IceCube conceptualizes scheduling with two objects: the Schedule and the Rule. Schedules have rules which determine a schedule's occurrences. To use an IceCube schedule, you add a recurrence rule to a schedule, and then get a list of occurrences back. This representation of scheduling felt pretty organic, and we found we could add rules that corresponded very closely to the Google Calendar interface. Moreover, the list of occurrences we could get back from the schedule corresponded very nicely to our list of patient ScheduledDoses. This neat little gem seemed to fit the bill perfectly.

To use IceCube within your application, you need to add the IceCube gem to your Gemfile, bundle, and include it as a module in your model. IceCube Schedule objects are saved as YAML via the Rails serialize macro, which lets you store and retrieve the data as objects.
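
A minimal setup sketch, assuming the ice_cube gem and a Rails model with a text column named schedule (all names here are illustrative):

```ruby
# Gemfile
gem 'ice_cube'

# app/models/prescription.rb
class Prescription < ActiveRecord::Base
  # Persist the IceCube::Schedule object as YAML in the text column,
  # so reads come back as a Schedule object rather than a raw string
  serialize :schedule, IceCube::Schedule
end
```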

In our case, because a Prescription has a start time and an end time, we set the IceCube schedule automatically and wrap it in a schedule instance method. We then use IceCube's add_recurrence_rule method to add rules to our schedules and create occurrences. From there, we basically end up using the Prescription model as a factory for ScheduledDose instances, using the schedule's occurrences as the basis for ScheduledDoses.
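
As a rough, self-contained sketch of that factory idea (a simplified stand-in for IceCube's schedule/occurrences behavior with hypothetical names, so no gem is required):

```ruby
require 'date'

class Prescription
  attr_reader :medication, :start_time, :end_time

  def initialize(medication, start_time, end_time)
    @medication = medication
    @start_time = start_time
    @end_time   = end_time
  end

  # Stand-in for a daily recurrence rule: one occurrence per day
  # from the start date through the end date
  def occurrences
    (Date.parse(start_time)..Date.parse(end_time)).to_a
  end

  # Prescription acts as a factory for scheduled doses
  def scheduled_doses
    occurrences.map { |date| { medication: medication, scheduled_on: date } }
  end
end

rx = Prescription.new('Amoxicillin', '2016-01-01', '2016-01-03')
rx.scheduled_doses.length # 3 doses, one per day
```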

Something that seems like a potential code smell is not having a separate Schedule model. It seems like this could be a separate responsibility from Prescription, and our Prescription model seems like it might be in danger of becoming a god object. But after giving it a little more thought, it seems like it might be okay for Prescription to own the schedule and be a factory for scheduled doses. Prescription doesn't seem likely to change a lot, since our application doesn't do much with it beyond generate scheduled doses. As such, scheduling seems to be an appropriate responsibility for prescription. This also aligns with the idea of prescription in real life too -- a prescription is really just a set of instructions for a repeated dose of medication.

How Programming Makes You a Better Person

"People think that computer science is the art of geniuses but the actual reality is the opposite, just many people doing things that build on each other, like a wall of mini stones."

-- Donald Knuth

  • It's humbling
  • You get to work with really smart people. Also, if you haven't followed the traditional route of getting a computer science degree, you learn how to deal with facing higher barriers of entry to the field. You learn to deal with the fact that some people are just going to be more competitive candidates than you, and that all you can do is strive to be better.

  • It's also empowering
  • While there are differences in people's abilities, you realize that you have as much potential as anyone else to contribute positively to the world. Programming solves problems.

  • You begin to question things at a deeper level.
  • Programming is effective at eliminating assumptions about the way something works. You become better at evaluating your own false preconceptions, of which, to your surprise, there are often many. This way of thinking inevitably expands into other areas of your life. You become slower to jump to conclusions. You begin to question what you're told, and learn that you need to find out for yourself. As a result, you become more effective in your work and life, and you tend to trust yourself more. You have fewer opinions and tend to value evidence-based approaches. This begins to apply on a wider and wider scale.

  • It encourages you to be a team player
  • Sometimes you have to put aside your own thinking for the team, which ultimately is linked to the goal of the project itself. You discover that you can accomplish more through working with others.

  • It encourages you to be analytical
  • The more understanding you have of something, the easier it becomes to solve a problem related to it. You think about the reasoning behind a certain approach.

  • You can actually contribute to the body of human knowledge
  • While human advancement has often been portrayed as the lone genius having a stroke of insight, the reality is that progress is built on the groundwork of others. Other more academic fields such as physics or biology have a high bar of entry -- you have to have been studying them long enough to contribute to a niche area. With programming it's fairly easy to add to the common body of knowledge.

  • It rewards persistence, resourcefulness, and hard work
  • You often come across problems for which you can't find a solution on the internet and have to come up with one yourself.

  • It gives you a renewed appreciation for education
  • When you've had to slog through hours of debugging or learning to code on your own, you gain such an appreciation for someone telling you how to do something. Then you suck the information so hard from their brain that they never have to tell you again, and now you are a 10x more efficient programmer. It makes you understand the value of education, which is to enable you to think critically and solve problems. As a consequence, it makes you want to help educate others.

  • It rewards curiosity and a love for learning
  • It's a privilege to be able to do something that is intellectually challenging, where you get to learn something new every day.

TDD vs. BDD vs. Unit vs. Functional Testing and More

Here at Flatiron we are about to embark on a 2-week TDD, or Test-Driven Development, project sprint. With that in mind, I set out to learn more about testing, which is one of the most important parts of building any application, and which I intend to learn a lot about. As I learned from listening to RailsConf talks, testing is more than making sure your application works as you expect – it also functions as a consistent form of communication across a team and application – meaning another developer can come on board, get oriented, and confidently begin contributing to an application with the tests as a guide. More than that, it can help communicate from the business requirement level down to the development level. Soon after embarking on my test research mission, I found myself thrust into a complex world with terms like BDD (Behavior-Driven Development), unit testing, functional testing, feature testing, and more. Since my team and I intend to take testing very seriously in this sprint, I thought I’d take some time to explore each of these testing philosophies and see what they might contribute.

Testing Paradigms/Processes

TDD / Test-Driven Development

Test Driven Development refers to a process by which you develop and test your code. Under a TDD paradigm, your development process looks something like this:

  1. Before coding, you write a test
  2. Run the new test along with any other tests, and watch it fail
  3. Write the minimum amount of code necessary to make the tests pass
  4. Refactor
  5. Repeat the process

TDD is not to be confused with unit testing or any other style of testing. Rather it refers to a process by which you can approach development. With TDD, you can get broad test coverage of your application. This gives you the confidence to make changes to your application as it grows, because you can see older tests passing and trust that your changes have not broken anything.
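
The cycle above can be sketched in miniature (a hand-rolled assertion and a hypothetical `add` method stand in for a real test framework):

```ruby
# A tiny hand-rolled assertion, standing in for a test framework
def assert_equal(expected, actual)
  raise "Expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Step 1: write the test before any implementation exists.
# Step 2: run it -- at this point it fails (NoMethodError): red.
# assert_equal 5, add(2, 3)

# Step 3: write the minimum code necessary to make it pass: green
def add(a, b)
  a + b
end

assert_equal 5, add(2, 3)   # passes -- now refactor and repeat
```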

BDD / Behavior-Driven Development

Like TDD, Behavior-Driven Development is a process by which you can approach development and testing. However, unlike TDD, BDD is more focused on how you test than when you test. With BDD, you focus your tests on behavior, rather than implementation, ideally starting from your customer's/user's expected experience. You picture your application as a black box (in other words, you try not to think about how your app reaches a result, but rather what the result should be), to which the user gives an input and receives the expected result -- i.e. if I click on this button, I expect to see this result. This format leads to what is called outside-in testing, in which you begin with a high-level test of the user's expectation, and that test inevitably (and rather gracefully) drives you down to develop and test the lower levels of your stack.

Perhaps the biggest contribution of BDD is that it helps you avoid brittle tests and therefore a brittle design in your application. You can imagine that, if you start testing implementation rather than behavior, you may start writing tests that simply confirm the work you have already done. Starting with the highest level of functionality and letting that drive how you test and develop your application makes your code more flexible and resilient. For optimal development practice, BDD can and should be paired with TDD.

Testing Types

Unit Test

So what then is a unit test and how does it fit into TDD/BDD? A unit test focuses on a single “unit of code”, such as a class method. Ideally the unit test runs in isolation from other dependencies, such as a connection to a database: external values are stubbed or mocked, and the unit test runs solely in memory. That way you have a more abstract, and thus more powerful, low-level test.
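
A sketch of what "stubbing the dependency" might look like (the class names and the weather scenario are made up for illustration):

```ruby
# The unit under test depends on some external gateway (e.g. an API client)
class WeatherReporter
  def initialize(gateway)
    @gateway = gateway
  end

  def report(city)
    temp = @gateway.current_temperature(city)
    temp > 25 ? 'hot' : 'mild'
  end
end

# A stub stands in for the real client, so the test runs purely in memory --
# no network, no database, just the unit's own logic
class StubGateway
  def current_temperature(_city)
    30
  end
end

WeatherReporter.new(StubGateway.new).report('Athens') # "hot"
```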

Integration Test

Per the name, an integration test builds on the unit test by testing multiple pieces of your application together. For example, testing an API interaction in your application. This type of test should find bugs that the unit test can't.

Acceptance/Functional Tests

Functional tests compare the result of a given input to a specification. These seem to be the most BDD-like type of test, and they might be the best place to begin when designing your application.

The definition I read of acceptance test that I like best is "An acceptance test suite is basically an executable specification written in a domain specific language that describes the tests in the language used by the users of the system." Acceptance tests seem to be the test translation of the user specifications, and they test the functionality of the whole application at the highest level.

Obviously, there is no exact prescription for how to test an application. If you explore the interwebs, you'll find some surprisingly furious debate about the definition and purpose of all these different testing methodologies. But having even a general understanding of the different tools at your disposal should help you make better decisions for your application. All of these testing paradigms and types should be taken into consideration when building a robust test suite that helps you design your application better. If you're finding it difficult to write a test, take a moment and consider whether you're writing the right kind of test. If you're starting at the unit test level, you may be starting too far down the stack. Figure out what your features are and work down. It might be difficult, but your application will be better as a result of it.

Sources:
https://robots.thoughtbot.com/testing-from-the-outsidein
https://www.youtube.com/watch?v=SOi_1reKn8M
http://stackoverflow.com/questions/4904096/whats-the-difference-between-unit-functional-acceptance-and-integration-test

How Javascript Is Interpreted

When you work with Javascript you learn that, unlike Ruby, a function in JS can “see” and alter variables declared outside its scope. This is because of how scoping works in Javascript. An inner function could overwrite a variable declared in the outer scope, or you could redeclare a local variable to give it new meaning. Seems pretty straightforward. Then I read this piece of code:
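
The snippet isn't shown here; a reconstruction along the lines of the classic hoisting example it describes:

```javascript
var foo = 1;
function bar() {
  if (!foo) {
    var foo = 10;
  }
  console.log(foo);
}
bar();
```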

Looking at this I thought: oh, well, the variable foo is defined in the main scope, so the if block inside the function bar() will never execute when bar() is called, since bar is able to read foo from the main scope. Therefore the program writes 1 to the console.

Except no, it does not. It definitely writes 10 to the console. What the &%$#^?

Here is another JS mind twister.
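
Again the snippet isn't shown; a reconstruction of the classic companion example it describes:

```javascript
var a = 1;
function b() {
  a = 10;
  return;
  function a() {}
}
b();
console.log(a);
```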

Now 10 should definitely log to the console, right? Nope, it’s 1. So what is going on?

This is where it turns out to be important to have some idea of how Javascript’s interpreter works, and how scoping, variable definition, and function declaration/expression all come into play. It’s kind of nonintuitive and more intertwined than I realized.

The question comes down to this: how do names actually come into scope in Javascript? It turns out it’s through a well-defined order:

  1. Language-defined

    First, JS will apply language-defined names to the scope. For example, every scope in Javascript has the keyword this. Note that only functions create scope in Javascript. Thus, if you call a function within different functional contexts, the meaning of this will change.

  2. Formal parameters

    Functions can have parameters formally defined on them. JS will read these next.

  3. Function declarations

    A function declaration looks like this:

    Declared(); // This works as long as the function is declared anywhere in scope
    function Declared() { /* Some stuff here */ }

    Function declarations are "hoisted" to the top of the scope, so that you can effectively call the function from anywhere within that scope.

  4. Variable declarations*

    Note that these are the last names to be defined by the interpreter, and only the declaration is hoisted -- the assignment is not. The assignment still happens at the line of code where you wrote it. This variable declaration rule also applies to function expressions (as opposed to function declarations, as above).
    a(); // TypeError -- `a` is declared (hoisted) but hasn't been assigned a function yet
    var a = function() { /* Some stuff here. */ };

An overview of the order in which Javascript interprets code.

With our newfound knowledge we can then explain how our gnarly original Javascript brain teasers are working:
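
For instance, the first snippet (with `foo` and `bar()`) effectively reads like this once the interpreter has done its work:

```javascript
var foo = 1;
function bar() {
  var foo;          // the inner declaration is hoisted to the top of bar()
  if (!foo) {       // foo is undefined here -- the outer foo is shadowed
    foo = 10;       // the assignment happens in place, as written
  }
  console.log(foo); // logs 10
}
bar();
```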

Sources:

http://www.adequatelygood.com/JavaScript-Scoping-and-Hoisting.html

Minority Report:
Ruby Exceptions & Errors

Not too long ago in one of our labs at Flatiron, we were asked to write some code that made this test pass:

it "should raise an error for invalid types" do
  expect { RPSGame.new(:dynamite) }.to raise_error(RPSGame::PlayTypeError)
end

After puzzling over this for a little, and reading some documentation, I got the test to pass by writing the following code:

class RPSGame
  attr_reader :type

  class PlayTypeError < StandardError
  end

  def self.valid_types
    @@valid_types = [:rock, :paper, :scissors]
  end

  def initialize(type)
    @type = type
    raise(PlayTypeError.new) if !self.class.valid_types.include?(@type)
  end
end

What was this neat little piece of code doing? Basically, raising an error if the user tried to enter an invalid action. I thought it was interesting, but soon moved on to other parts of the lab. However, I never forgot about the interesting little piece of code I had encountered. Ruby exceptions still feel inherently different to me than other parts of the language. They seem to be a little exotic, like rare birds in a forest full of, I don’t know, carrier pigeons. I guess this feeling makes sense, since exceptions tend to only come into play when things in our program go wrong. Also, there are a couple of special methods associated with them that not every class of object has. So, with that little intro, here are 7 things that are helpful in understanding exceptions:

  1. First and foremost, exceptions are Ruby objects.

    Like nearly everything in Ruby, exceptions are objects. This is great in that they can contain information about themselves, and can have a real presence within our programs. They can be passed around and instantiated whenever necessary.

  2. Exceptions are for exceptional circumstances.

    You probably don't want to use an exception in place of a well thought-out program and appropriate logic constraints -- say, using one to guide the user towards a different input type in an application that already has built-in form/params validation. Instead, you probably want to make use of exceptions where it's reasonable that an unlikely but highly detrimental situation could occur in your program. It's worth spending some time thinking about what kind of tool exceptions are and where they work best.

    Imagine we have an app that uses a 3rd party API. What happens if the connection times out, or the 3rd party's server is down, or for whatever other reason, their DNS isn't working? It's unlikely to happen, but because our application has this dependency, it's reasonable to have a contingency plan. This use case is a good example of where exceptions make a good fit.

  3. Exceptions can be raised.

    The raise method comes from the Kernel module. By default, raise creates an exception of the RuntimeError class. To raise an exception of a specific class, you can pass the class (or an instance of it) as an argument.

    raise "Here is a regular RuntimeError"
    raise StandardError.new("Here is a StandardError")

  4. When an exception is raised, it halts the program.

    As a Ruby programmer, you encounter this on a day-to-day basis. Here's an example:
    "10" + 3
    TypeError: no implicit conversion of Fixnum into String
    There's no way to continue from here. Basically, this is where our program has ended up.

  5. Exceptions are associated with begin/rescue blocks.

    Begin/rescue blocks function a lot like if statements, in that if an exception is raised in the "begin" part, you can rescue from it and essentially branch your program. Unlike an if statement, however, they can be a little more granular about what type of exception to rescue from.
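
A minimal sketch of that branching (the method name and the division scenario are illustrative):

```ruby
def safe_divide(a, b)
  begin
    a / b
  rescue ZeroDivisionError
    # Only this specific exception class is rescued; anything else still raises.
    # The program branches into fallback logic instead of halting.
    Float::INFINITY
  end
end

safe_divide(10, 2) # => 5
safe_divide(1, 0)  # => Infinity -- rescued, so the program keeps running
```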

  6. Exceptions have methods.

    This ties into #1. The key methods are message, backtrace, and inspect.

  7. Make sure you specify the Exception type you are rescuing from.

    I've saved probably the most important point for last. Errors inherit from the class Exception.


    As you can see, Exception has a big family tree. When you rescue from an Exception, you want to be as specific as possible about the type, so that you don't accidentally rescue from an exception you weren't anticipating.
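
    A sketch of rescuing specifically (the method name and connection-error scenario are illustrative):

```ruby
require 'socket' # defines SocketError

# Rescue only connection-related errors; anything else -- say, a
# NoMethodError hiding in the block -- still surfaces instead of
# being silently swallowed
def with_fallback
  yield
rescue SocketError, Errno::ECONNREFUSED
  :fallback
end

with_fallback { raise SocketError, 'DNS is down' } # => :fallback
```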

    This wouldn't be fun if, say, you were trying to rescue from a connection error, and instead rescued from a NoMethodError. You would then have unintentionally branched your program into the wrong type of logic.

Sources:
http://daniel.fone.net.nz/blog/2013/05/28/why-you-should-never-rescue-exception-in-ruby/

Efficient Coding With Rubymine:
a Quick Guide

Programmers tend to be a crew that likes their tools. From the OS, to version control, to bash and Ruby itself, one could argue that pretty much every aspect of programming involves a tool. Usually, the linchpin of all these tools is the text editor, the place where the code actually gets written. Recently, instead of using Sublime Text, I’ve been using an IDE called Rubymine. While I’m a fan of Sublime Text, I’ve been finding that Rubymine provides some advantages that give me a boost in efficiency. In this post I give a quick review of some of Rubymine’s features.

Debugging

Rubymine comes with a built-in visual debugger that can function as an alternative to debugging with pry. Let’s take a look at how this works.

Navigate over to the rspec test you want to run (or the file that serves as your program’s entry point), right click, and select debug (or Option + Shift + D). You can also select a specific test to run within a spec. If there are no bugs, your test will pass and your program will run. When you debug, three tabs will open up at the bottom: the Debugger, Console, and Interactive Console. The Console reports the output of your test, and will alert you if there are any errors. Let’s say you run a test and you hit an error. Here are the steps you can take to take advantage of Rubymine’s debugging capabilities:

  1. Head over to the line in your program where Ruby ran into a problem.

  2. Set a “breakpoint” – this is very similar to setting a binding.pry. The next time your program runs Rubymine will stop it at this point. Use Command + fn + F8 to set a breakpoint. You can set several of these at a time.

  3. Rerun your debugger (Option + Shift + D).

  4. Now, when you check the debugger tab you'll be shown the current values of the local variables within the scope of your breakpoint, and Rubymine will write out these values in your actual code. You also get access to an Expression Evaluator and can set a watch or a condition on variables or expressions to see how they change as you move through your code.

  5. Head over to the Interactive Console for an IRB that gives you access to these variables.

  6. Once you’ve corrected your code, hit ‘Rerun’ to rerun your test.

This might not sound all that different from using pry, but when you're debugging, getting visual cues and having all of the information in one place, without having to run multiple commands and switch between panels, is invaluable.
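For comparison, here is a rough sketch of the pry-style equivalent of step 2 (the method and tax rate are invented for illustration; the breakpoint line is commented out so the file runs straight through, and with the pry gem installed you would uncomment it):

```ruby
# Prices in cents so the arithmetic stays exact
def total_cents(prices)
  subtotal = prices.sum
  tax = subtotal * 8 / 100       # integer math: a flat 8% tax, truncated
  # require 'pry'; binding.pry   # pry's rough equivalent of a Rubymine
  #                              # breakpoint: pauses here with subtotal
  #                              # and tax in scope for inspection
  subtotal + tax
end

puts total_cents([1000, 500])    # prints 1620
```

With pry you would type `subtotal` or `tax` at the prompt to inspect them; Rubymine's Debugger tab shows the same values automatically, and its Interactive Console plays the role of the pry session.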

Syntax Highlighting & Auto Suggestion

Rubymine remembers your classes and their methods no matter where you are in your program, so when you reference a class or create a new instance it provides a list of method suggestions. Pretty helpful when you're trying to remember a method. Also, unlike Sublime Text, it provides code completion and signals you when you have a syntax error or have misused a local variable. No more having to type “end” to close an if block.

Terminal

You can run bash directly from Rubymine. Need I say more? No more need to shift + tab back and forth from terminal.

The Verdict?

Rubymine seems to be super powerful. It has other features I haven't begun to explore, like support for other languages and refactoring. It's keyboard-centric, which means it's meant to be handled through keyboard commands as much as possible. Of course, having to learn a bunch of keyboard commands can be time-consuming and an extra stress when you're already trying to learn something new. But once you have a good number down, they save you a lot of time. Rubymine definitely takes more time to learn than Sublime Text, and can be a learning project all on its own. There are a ton of features, which can seem overwhelming and like a distraction from the actual task: coding. Also, at $99 for an individual license, it's not cheap. But an upfront investment of time and money seems very likely to pay off in terms of efficiency and support.

Some purists might argue that they like sparse tools like Vim better because they can code right from the command line. In my view, why not use a tool that can provide you more context immediately? When you’re doing something as cognitively involved as coding, it helps to have all the signaling you can get.

In the end, there is no “best” text editor or environment set up – it’s a matter of individual taste and circumstance and what you feel supports you best in your workflow. As long as a tool is empowering you, it’s worthwhile. I’m still getting to know Rubymine, but for now, I’m enjoying it.

Zen and the Art of Programming

Practice makes the master. -Spanish saying

Some time ago, I was applying for a developer job at a startup. Part of the application process involved solving a code challenge that required you to programmatically retrieve and manipulate a set of data through an API. I chose to do the challenge in Node.js, because I like Node (and Javascript-y things in general) and because Node was a key part of the company’s product and of the job I was applying for.

I remember firing up Sublime, writing some preliminary code, and retrieving the first set of data. I felt great. I felt like, “I got this”. In fact, I had a lot of big feelings about this one code challenge. Up until that point, I was largely a self-taught developer. I thought, how amazing would it be if I got this job, having learned to code mostly on my own. Like, it would mean I Am Really Smart after all and Totally A Real Developer and all these other big things that would validate me and free me from my insecurities. Then I got deeper into the problem.

Long story short, I hit a part of the problem involving a nested, tree-like structure. I tried recursion, but that seemed to be a friggin nightmare. Then a colleague suggested I try a queue instead of recursion. That seemed better, but I was still in callback hell thanks to Javascript's lovely asynchronous nature.

The details of the problem are largely unimportant. What is important is that I spent a weekend on the problem. Actually, more than a weekend on it. From the time I received the challenge, I spent literally every waking second on it, well into the week, well into when I should not have been thinking about it anymore, like when I should have been eating or taking bathroom breaks or sleeping. I was totally out of my element, and that was incredibly hard to accept. For most of that week, I couldn’t even understand what my problem was, let alone how to fix it. It took me half the week alone to learn about tree structures – a way for me to conceptualize the problem – and even more days for me to learn about callback hell, and about Javascript’s asynchronous nature.

The important thing about the problem was not that I solved it, but that I couldn’t solve it, no matter how hard I tried, no matter how much I wanted it. It was torture. Just like I had felt solving the problem would say something great about me, not solving the problem seemed to say something awful about me.

So much in life is actually pretty easy. You show up, you do the work, you get the credit. Most things in life are pretty effortless. And so even when we know we have to put in the time to learn something, or to solve something, we still expect that process to be effortless. Like a savings account, we expect to put some time in and get a reliable return.

But sometimes things are beyond our control. Sometimes you can’t solve that problem, at least, not right now, perhaps in this place, not in this way, or with this tool. I couldn’t solve the problem on my own through the sheer force of my intellect. I researched it, wrestled with it, and yet I still didn’t “win” – though what exactly I was trying to win remains a mystery.

Here is the paradox: we can’t fully believe in our own mastery if it has been effortless. Problems in programming, as in life, require patience and faith – the faith that they will be resolved in time, though perhaps not in the way you would like, and the patience to struggle for awhile, though it may be uncomfortable for much longer than you anticipate. It is here where I like the Zen concept of beginner mind. No matter how experienced we are, if we always think of ourselves as a beginner, we can approach a problem with humility, openness, and a certain amount of self-gentleness. We can accept our frustration simply as evidence that we care deeply about our craft, and that we want it to be worth the struggle. In the words of one of my former coworkers, “All is practice.” If we are always practicing, then we are always learning. If we are always learning, then we have never fully arrived, and that’s okay, because that was never really the point anyway. Mastery is not a binary state but a gradual unfolding, revealing itself to you when you least expect it.

The truth was this: my experience on this particular problem didn’t say anything about me beyond the fact that I struggled with it for a while. It was just a moment in time.

In the end, I did submit a beautiful little program with the help of my colleague and Promises.js. It worked, and while I didn't end up working for the company, I gained so much more by pushing myself far beyond where I thought I could go.