React is a relative newcomer to the ever-shifting sands of javascript framework development. After skimming the tutorial, I was intrigued enough to give it a shot by rewriting a page of a small application with it.
Source root is here, but most of the React code is in ui.jsx and StatusBox.jsx. The server code just implements a REST-like interface in Go. The demo is Health Monitor: the /detail/ page uses React; compare it to the home/list page, which uses handlebars for rendering.
TL;DR: I like it, much better than the typical disjointed javascript methods + templates pattern, and more so than Backbone and Knockout in my limited experience with them. I'm itching to use React in my next project.
Most javascript binding frameworks face the same dilemma: Litter the view with logic or stitch together markup in the code. I’ve usually favored the former using tools like handlebars; React takes the latter approach, but not quite in the way you’d think.
Ever done this?
[code sample omitted: HTML stitched together from javascript strings]
It’s hard to tell if that’s even well formed, let alone correct.
React uses a syntax that permits a combination of HTML and javascript, which Facebook calls HTMLavascript and is enabled by <script type="text/HTMLavascript">.
If only. Actually, it's called JSX, and clever you probably wised up to the correct script type: text/jsx. Anyway, here's an example taken from the aforementioned project:
[JSX code sample omitted]
React can parse that into standard javascript on-the-fly during development or use a precompiled version for production.
Listed below are a few pros and cons I’ve gleaned along the way.
JSX. The inline HTML syntax looks weird at first, but it's concise and clean. Plus, the parser is amazingly precise with its error messages, and the accompanying suggestions are descriptive and accurate. JSX is still 90% javascript, so there are almost no interoperability problems with other javascript libraries. Just be aware that React likes to control its own DOM, so make sure plugins don't screw with it radically. I read somewhere that mounting to <body> can cause hard-to-find bugs, as many plugins add and remove elements there liberally.
Client-side Model/state management. They say it’s the “V in MVC” but I find its real strength is in how it maintains a proper model. Typically I’d just map json to input boxes and pull them out at the right time. React enforces an internal state, and each change is reflected in that state. When the user is ready to commit a form, there’s no giant method extracting and converting data as the model is current in state.
Occupies the sweet-spot between functionality and shallow learning curve. It’s bite size enough to allow the developer to convert their code over in steps, testing it along the way, but carries enough features and structure to encourage the developer to create powerful and modular components.
Encapsulation. Each component encapsulates its own behavior and UI (and state if necessary, but less often than you’d think). For example, the markdown component manages its own toggle button behavior which controls whether to display rendered text or the input area. The components allow such good organization, I originally started with just an edit page + save button, but added in a view-detail step to all the components + edit and cancel actions in one big edit. After reviewing the code, fixing whatever syntax error messages came up (again, the parser is epic), the thing just worked. Not a huge deal in Java or Go, but Javascript??
Documentation is pretty darn good about walking you through a typical design scenario.
Lends itself to iterative refactoring. As I was developing, it was easy to see when a couple elements had outgrown their space and could be pulled into their own component or group.
Passing a function down the parent-child chain feels like a bucket brigade. Everyone gets their hands on it, only the last guy uses it.
External event interaction. I couldn't find a way to fire React's SyntheticEvent in a way that bubbles up to parents. The datetimepicker fires its own change event, dp.change; how do I notify an ancestor React element several levels away to update the model?
These points are taken from my limited experience in this particular domain. Different scenarios may invalidate some of this, YMMV.
Use React.addons.update to update state. It makes it easier to add a new object to state without having to preserve everything explicitly in every this.setState(). The query syntax for partial updating is pretty neat.
I like the pattern of tacking extra data onto SyntheticEvent to pass info back to a parent, e.g., TextInput.handleChange.
Updating state kicks off a rerendering of all components under the parent. A typical flow: (1) the component renders with empty state, (2) an AJAX request fires, (3) the response arrives, (4) setState triggers a rerender. Steps 2-4 incur a delay, resulting in a brief flash of the first UI rendering followed by a post-AJAX redraw. I use a PageBusy flag in state to suppress the first render until the data comes back.
Being able to reap rewards early from small integrations of React into a project makes it an easier sell to a team of varied skill level and time commitment. You could cherry pick a single widget on your page and convert it over for a test drive. The component architecture delivers well on code reuse. I’m not one to jump on every new javascript library like it’s the new hotness, but this one definitely has my attention.
A previous post looked at the Bullseye problem in Rust. I'm going to revisit it in Go. My base machine will be a fairly respectable "CC2 Cluster Compute" EC2 instance:
[instance specs omitted]
After porting the O(N) simplistic summing solution over from Rust and utilizing only one core, 1000 large test cases ran in 1280s. In the large set, there are a total of 6000 cases, 4000 of which involve large numbers, so roughly speaking, this solution would take 1 hour and 25 minutes. One of Go’s main features is its very simple parallelization via channels and goroutines. Goroutines let us fan-out our solver to multiple threads. Channels help us fan-in their responses to a single array in a threadsafe manner.
Here, we let the program use all logical processors. NumCPU detected 32 in this case (perhaps the 16 cores are hyperthreaded):

runtime.GOMAXPROCS(runtime.NumCPU())
Spawn one solver function per input. Note that the solver above does not return a value but writes it to the provided channel:
[code sample omitted]
Returned values are collated into an answer slice and placed in order:
[code sample omitted]
Executing this brought us to 83 seconds: a 15x improvement. Not bad. There's a limit to how much scaling can be done on a single machine, so our next step is to use a cluster. If we divide the test cases out to 2 machines, theoretically, we should cut that time in half.
Data is grouped into 2 batches in this case. I’ve tried to distribute the load evenly among the nodes to ensure that no one gets the lucky easy half:
[code sample omitted]
Kick off a sender for each batch and don’t wait for a response:
[code sample omitted]
Batches are serialized and sent to the node. When the results come back, send them back over the Result channel one at a time:
[code sample omitted]
The Node mode code: listen for a connection, deserialize the batch, and spawn solvers. In Single mode we extracted the answers into a []uint64, but we need to return this batch to the Master node, so we'll keep it in a []*Result and let the master sort things out:

[code sample omitted]
Executing this on 2 machines yielded 42s, very close to our theory. Since I'd gone this far, why not see what 8 machines would do? Here are the results from all 4 executions:

[results table omitted]

Enabling multithreading and 8-way parallelization brought us from 1280s to 12s, a 100x improvement; very good. Now that we've got our cluster up and running, here are execution times for various sized loads:

[timings table omitted]
And finally, all 6000 cases of the large set executed in 43.38s. This is lower than the 4096 cases tested because for those tests I was repeating a single case which was perhaps more complex on average than the cases in the large set.
Possible enhancements:

[list omitted]
I didn't find language in the Terms and Conditions that prohibited the use of clusters (of machines; clusters of humans are a no-no). However, Code Jam is clearly focused on elegant expression of a problem using code and on algorithmic efficiency. I'd guess that code executing on a single machine - multi-threaded or otherwise - falls within the spirit of the contest; the computing power of a single machine is fairly predictable. Once clusters are in play, there's too much variability among contestants (e.g., access to a server farm, the finances to run an EC2 cluster, being stuck with a stone-age language limited to single threading * cough * javascript). The playing field becomes a little imbalanced against the underprivileged coder. At any rate, the finals provide only a single machine with no internet, IIRC.
This was merely a POC to mess around with Go’s concurrency mechanisms, but for those of you who need just that extra edge, the code is up on github: clusterjam.go.
Problem 1A in Code Jam 2013 involved finding the maximum number of rings that could be drawn when creating an archery target; the starting radius r and paint supply t vary.
I cranked out a naive implementation for the small problem set, which looped through the formula for calculating the paint used by each ring, summing as it went and comparing against the max paint supply. Since 1 mL of paint conveniently covers exactly pi cm2, pi was omitted. Given radius r, the paint required for the k-th ring is found by:
This was also the first time I’d used Mozilla Research’s Rust, though I did gear up for the contest by preparing a vanilla template. As with any new language, the syntax isn’t completely cast in iron, and if you google around, you’ll see some examples with slight differences. All code samples are relevant for version 0.6:
[Rust code sample omitted]
Complexity is O(N), with N dependent upon t - r. This took care of the small dataset fine. However, for a small radius and lots of paint - up to 10^18 - this could take a long time. Indeed, my first test with large-dataset-type numbers crawled. I immediately looked for the fastest solution I could think of, which involved arithmetic series:
computing the total paint for j rings, and solving for the positive root via the quadratic formula, given p as the paint used:
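The equations themselves were images and are gone. Reconstructed: each ring costs 2r + 4k - 3 (sans pi), so the arithmetic series for j rings sums to the first expression below; setting it equal to p and solving 2j^2 + (2r - 1)j - p = 0 gives the positive root on the right:

```latex
p(j) = \sum_{k=1}^{j} (2r + 4k - 3) = 2j^2 + (2r - 1)j,
\qquad
j = \frac{-(2r - 1) + \sqrt{(2r - 1)^2 + 8p}}{4}
```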
And in code:
[Rust code sample omitted]
There's no built-in BigInt sqrt, so I had to use Newton's method:
[Rust code sample omitted]
The good news was that the solution was constant time, O(1). The bad news was computation, debugging and BigInt wrangling ate up too much time so I failed to submit before the contest ended.
Turns out there's a better approach here. Part of an earlier calculation yielded the formula for computing how much total paint would be used for j rings. This could be used to approach the available paint t in faster than linear time. In fact, a binary search did occur to me at one point. It's orders of magnitude faster, but given how slowly the O(N) solution was executing, I figured O(log N) wouldn't make much difference. Sucker ran nearly as fast as the O(1) solution:
[Rust code sample omitted]
Given the time constraints in Code Jam, sometimes it's difficult to stop and gauge what's going to be fast enough. Then again, by now I should be able to adapt binary search into any solution quickly enough that a sanity check shouldn't cost more than a few minutes.
The Rust templates for all the code above are on github: Rust Code Jam templates
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('www.google.com', 80))
s.send('GET / HTTP/1.1\r\n\r\n')
website = s.recv(1000)
self.response.write(website)
The more important point here is that libraries that rely on socket can now be used. Indeed, the App Engine team has provided a demo that uses nntplib.
A question on Reddit spurred my curiosity and gave me a chance to try out Amazon EC2 for the first time. This is a small proof-of-concept that demonstrates accessing a remote Redis instance hosted on EC2 from App Engine.
This section is a dump of my notes from setting up EC2. Skip this if you already have a Redis server.
Navigate to the Amazon EC2 console Dashboard and click "Launch Instance". Go with the "Classic Wizard" and choose a server. I used "Ubuntu Server 12.04.1 LTS, 64-bit". For most of the way, I stuck to the defaults, so you'll end up with a "T1 Micro (t1.micro)" instance type. Blow through the Launch Instances, Advanced Instance Options, and Storage Device Configuration screens. Give a value to the Name key tag if you wish; I skipped it.
You’ll have the option to choose an existing Key-Pair if you’ve done this before. If not, Create a new one by giving it a name and click “Create & Download your Key Pair”. Keep track of that downloaded file; you won’t get another opportunity to download it. In fact, go ahead and copy it to the ~/.ssh folder of the machine you’ll be connecting from.
Next “Create a new Security Group” if you don’t already have one and give it a name and description.
In the “Create a new rule” dropdown, select SSH and click “Add Rule”, so you can admin the box.
For the next rule, keep it at "Custom TCP rule" and enter a port range of 6379. Leave "Source" alone again, and click "Add Rule".
Now “Continue” and “Launch”.
Note the name which looks something like “ec2-99-999-999-999.compute-1.amazonaws.com”
With your PEM file in the ~/.ssh folder[1]:
$ ssh -i ~/.ssh/ec2_redis_keypair.pem root@ec2-99-999-999-999.compute-1.amazonaws.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for '/home/xhroot/.ssh/ec2_redis_keypair.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/xhroot/.ssh/ec2_redis_keypair.pem
Permission denied (publickey).
Ok:
$ chmod 400 ~/.ssh/ec2_redis_keypair.pem
$ ssh -i ~/.ssh/ec2_redis_keypair.pem root@ec2-99-999-999-999.compute-1.amazonaws.com
Please login as the user "ubuntu" rather than the user "root".
Whoops, ok:
$ ssh -i .ssh/ec2_redis_keypair.pem ubuntu@ec2-99-999-999-999.compute-1.amazonaws.com
Success! Now, install Redis:
$ sudo apt-get install redis-server
[2] Notice that it’s only listening for connections on the localhost port:
$ netstat -nlpt | grep 6379
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -
[3] Adjust the configuration file to permit remote connections:
$ sudo vim /etc/redis/redis.conf
Comment out the line:
#bind 127.0.0.1
It was on line 30 for me. In vim, 30Gi# jumps to line 30 and inserts the comment character; hit ESC, then ZZ to save and exit.
[4] Restart Redis:
$ sudo /etc/init.d/redis-server restart
Stopping redis-server: redis-server.
Starting redis-server: redis-server.
Check the port again:
$ netstat -nlpt | grep 6379
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN -
You can now connect from the outside. Connect to Redis locally:
$ redis-cli
redis 127.0.0.1:6379> set igor "IT WORKS!"
OK
I had a Redis client installed on a local Windows machine so I thought I’d mix it up by connecting from there.
Start -> Run -> "c:\Program Files\redis\redis-cli" -h ec2-99-999-999-999.compute-1.amazonaws.com
> get igor
"IT WORKS!"
Congratulations! You are the proud owner of a public, wide open, credential-free Redis instance. Fortunately, this article is merely a POC so we’ll be driving blindly past red flags for now.
Sample App Engine-Redis app: https://github.com/xhroot/appengine-redis
It’s not necessary to deploy this app to test it; the dev_appserver works fine. If you choose to deploy, note that “Sockets are available only for paid apps.” Create a new App Engine application and enable billing in the dashboard now to allow for the 15 minute delay before billing is active.
Clone the application above on your machine. In a separate folder, clone the Python Redis client:
git clone git://github.com/andymccurdy/redis-py.git
Copy the redis folder into the root of the App Engine application. Modify app.yaml to match your application name:
application: yourappname
Modify main.py to use your EC2 instance name:
REDIS_HOST = 'ec2-99-999-999-999.compute-1.amazonaws.com'
Again, you can either run this on the local dev_appserver or you can deploy it. Once running, you can fetch values with GETs. This method uses r.get() to retrieve values from the remote Redis installation:
$ curl -w '\n' 'http://yourappname.appspot.com?igor'
igor="IT WORKS!"<br>
Use PUT to add/update values. This method uses r.set():
$ curl -w '\n' 'http://yourappname.appspot.com' -X PUT -d 'proj=treadstone'
$ curl -w '\n' 'http://yourappname.appspot.com?igor&proj'
proj="treadstone"<br>igor="IT WORKS!"<br>
Use DELETE - r.delete() - to remove values:
$ curl -w '\n' 'http://yourappname.appspot.com?igor' -X DELETE
$ curl -w '\n' 'http://yourappname.appspot.com?igor&proj'
proj="treadstone"<br>igor="None"<br>
As mentioned earlier, port 6379 on the EC2 server is open to the public, which is not secure. There are some options: restrict the security group's source to known IPs, tunnel the connection over SSH, or at minimum set a requirepass in redis.conf.
That’s it. Perhaps the main driver for having an external caching layer as opposed to using memcache is to have greater control over data eviction. Another reason might be that a distributed cache may be suitable for synchronizing across platforms. It’s always nice to have choices and to see more capabilities being added to this platform. Looking forward to seeing what’s new in the I/O release.
[2] http://stackoverflow.com/q/14287176
For a long-running job, the browser would have to constantly pester the server for updates. With server push, the browser can initiate the job, then sit back and let the server respond at its leisure:
For Google App Engine developers this is implemented by the Channel API. There are also third party services like Pusher and Beaconpush that provide this capability through their API. And recently, thanks to David Fowler and Damian Edwards, .NET developers can also integrate this technology into their projects using SignalR.
I’ve cooked up a small demo that uses SignalR to demonstrate one solution to the problem of conflicting updates. When multiple users act on the same set of information, it’s easy for it to get out of sync. This demo uses SignalR’s Hub to push updates to the users the moment they occur so they can immediately act on the changes.
Our application is a student registry form. Users can update a student’s name, GPA and enrollment status. A server section, visible only for demo purposes, shows us the state of the database. Because the most recent data is always pushed to the browser, we can take immediate action instead of discovering a save conflict at the end of a long form.
Shown at right is your typical N-tier MVC stack. Browser requests are handled by the controller, data is passed to the service for the real legwork, the DB is consulted as necessary, and the data is passed back up the chain to the browser. Fantastic. Where SignalR comes into play is when the StudentService decides a legitimate update has taken place. It then notifies the Hub that a new update has occurred and that it should broadcast the new record to all listening browsers.
Let’s take a look at the setup.
[code sample omitted]
The HubConnection establishes the base URL that the client side API will use to watch for updates. This is done in the controller and injected into the StudentService. Here’s the javascript setup:
[code sample omitted]
This starts the hub on the client side. Once started, a unique id is assigned to this browser which comes in handy as we’ll see later. Next we look at the Hub,
[code sample omitted]
and its corresponding javascript:
[code sample omitted]
StudentHub derives from the abstract Hub. The hub only needs one method for this demo, and that method sends out the updated data. It calls a javascript method called updateStudent and sends it one argument: transport. By the way, StatusTransport is a wrapper around the data to be sent - StudentViewModel - and is used to attach additional information about the request (e.g., operation status, error messages). In the javascript, the function name should match the method that is invoked from StudentHub; result is the javascript object representation of the transport object sent by StudentHub. When Clients.updateStudent executes on the server, all connected browsers will execute this javascript function locally.
Recall the video demo above. When the student is saved, all other browsers immediately show a dialog notifying them that an update recently occurred, with the option to accept the recent changes. Note also that the browser that issues the save will receive an updated record from both the Ajax save response and the StudentHub. We use the ClientId to check the identity of the sender and ignore the duplicate update when the client id is our own.
To contrast this behavior with the conventional approach, disable the hub by commenting out this line in StudentService:

[code sample omitted]
Now, concurrency violations will only be detected at save time.
I've built this example as an MVC project, but Webforms could be used as well, and I've included a WCF service as an example. To use it, change the serviceUrl ternary to true in the Index.cshtml view.
To wrap up, the source for this demo is available on github: Registry.
Koderank is an online whiteboard that provides an environment for quick and easy coding interviews. Interviewers can give candidates small coding exercises to gauge their abilities, and can view the code live as it is being typed. Voice chat is available through Twilio Client to allow the interviewer and candidate to converse during the session.
I originally created it for the Twilio Client launch contest, and it was selected as one of five winning entries. The first iteration was done in Python/jQuery. As a learning exercise, I reimplemented it in Go and Closure Library.
One of the fun things about using a language as new as Go is that a lot of third-party libraries haven’t been released yet, so you often get to be the pioneer, although it sometimes means a few arrows in your back. Here are a couple tools I had to create:
Sometimes it’s nice not to have to write everything yourself, though. Kudos to the developers of the third party libraries that I used such as Diff-Match-Patch, CodeMirror, and GoJWT. They’re credited on the About page. In the same spirit, I’m opening the source to the Koderank website which you can now find on github: Koderank source. It runs on App Engine and uses the Channel API for sending data to the listening browser. The server and client-side are written in Go and Closure Library, respectively.
I’ve written previously about my experiences with Go and Closure.
:= is used as declaration + assignment. var and a type are necessary if there is no assignment (and thus no type to infer from).

[code sample omitted]
In Go?

[code sample omitted]
More importantly, multiple variables can be returned from a function. This eliminates having to create transport objects whose only purpose is to wrap multiple values.
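For example, a trivial sketch (parsePort is an invented illustration, not from the original post):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns a value and an error together; no wrapper
// transport object is needed to carry both back to the caller.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 1 || n > 65535 {
		return 0, fmt.Errorf("bad port %q", s)
	}
	return n, nil
}

func main() {
	port, err := parsePort("8080")
	fmt.Println(port, err) // 8080 <nil>
}
```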
Here’s a list of some great resources for learning Go: Go Resources.
rangeCheck is a formidable accomplishment. Here it is in LOLCODE:

[LOLCODE sample omitted]
Example usage:
[code sample omitted]
I used C++ primarily, and Go in one case since I knew it had a handy string.Map function. My solutions can be found on github.
One of the problems, named Kingdom Rush, provided a game scenario as input and required a result indicating the shortest number of turns to complete it.
On my 1st pass I implemented the obvious naive solution:
[code sample omitted]
It passed the given samples, so I submitted, and it failed. The grader doesn't say why or give errors, so I had to find a case that broke the code. This is where pencil and paper are invaluable, since at this point you have to simulate instances by hand. It turns out you can shorten your game by doing 2-star ratings where the 1-star rating is unlikely to be completed until late. I resubmitted with this modification and it passed.
What I learned:

[list omitted]
cmd /k wevtutil qe System /q:"..." /rd:true /c:1 /f:text

Here's the breakdown:
cmd /k - Opens a command session, executes the command that follows, and keeps the window open. Useful if you're entering commands in a Run dialog; unnecessary if you already have an active command prompt.
wevtutil - Windows event log read utility.
qe - Query events ...
System - ... specifically from the system event logs.
/q:"..." - XPath-like query used to traverse the XML.
/rd:true - Direction; "true" shows latest first.
/c:1 - Number of results to return.
/f:text - Format output as text, which is fairly readable in this case. Could also opt for "XML".
This simply returns the last event with a source name of 'Microsoft-Windows-Power-Troubleshooter'. Skim the result to verify that it's a return-from-sleep event. If not, increase the number of results returned via the /c parameter.
This can also be accessed via the GUI using Start -> Run -> eventvwr -> Windows Logs -> System -> Find -> “Microsoft-Windows-Power-Troubleshooter”.
I’ve found this command useful in consulting situations where I need to track time for billing purposes.
[code sample omitted: invoking a stored procedure by its name string]
If the name is mistyped or renamed, the developer would usually only find out through runtime exceptions.
To solve this, I wrote a T4 snippet that creates a C# class of stored procedure names as string constants. This provides intellisense hints for the developer as well as compile time checking of stored procedure names.
The section at the top is for configuring project specific information (e.g., namespace, class name, database, connection string). There’s also a naming filter function where you can define how special characters get translated into a permissible C# variable name.
[code sample omitted]
The template executes independently of the rest of the project, so the block that follows is responsible for setting up the environment to grab the connection string out of the Web.config. T4 does not regenerate automatically in VS 2010: after a stored procedure is added to the database, right-click the template and select "Run custom tool" to regenerate the constants.
The source is available on github: Stored Procedure Constants
Think string.Format tricks, regex grouping, or reflection expressions.
The code in this post is a diversion from conventional best-practice. This is largely an annotated dump of some of my saved LINQPad queries, in some cases merely products of curiosity. Consider this a disclaimer.
When composing a LINQ query, I've often been impressed with how expressive it can be. For example, this returns a list (an IEnumerable, actually) of all prime numbers up to 100:
[code sample omitted]
I started with nested for loops, which reduced to two Ranges. There are even some prime sieves out there that are not much longer than this.
Seems like everyone’s got a fizzbuzz implementation lying around somewhere. Here’s my code golfed version:
[code sample omitted]
Underscore.js brings much needed functional programming utilities to javascript. One example counts the word frequency in a song:
[code sample omitted]
Compare it to a LINQ version:
[code sample omitted]
Pretty close to 1:1, thanks to anonymous types. If anything, this shows you how versatile Underscore.js is. That aside, I think this is more LINQish:
[code sample omitted]
The variable name “verse” was used to avoid confusion with the anonymous property “line”, which is never used.
In one project, I had to convert a string to a bool. Fiddling around in LINQPad, I arrived at this construct:
[code sample omitted]
This one-liner evaluates any string stringBool as true if it is "True" or "true" (or "tRue", etc.), but false for anything else. What's interesting is that it declares a boolean variable, references it as an out parameter, reads it, and reassigns to it in one statement. As an exercise, how would you make one change to reverse the logic: every string is true except an explicit "false" or "False"?
Fortunately, I left none of this ambiguity in and at the end of the day, simply used:
[code sample omitted]
How about a statement that chooses between two functions to execute using the conditional operator?
[code sample omitted]
The Action cast tells the compiler what kind of lambda is being used. The second expression can infer the same type and omits the cast. Since the expression will evaluate to a single Action, it needs to be executed with the trailing (). It's possible to pass in a parameter:
[code sample omitted]
If you’re wincing at this, can’t say I blame you.
This one might actually be useful. Every so often I need a unique identifier, so I turn to C#'s Guid class. After a while, I began thinking of ways to compress that 32-char guid down a tad, and after some experimenting, came up with this:
[code sample omitted]
When executed in LINQPad, you should see something like this:
[sample output omitted]
32 characters can be represented in a 23 character, URL-friendly, globally unique string.
I encourage you to download LINQPad, play around, discover interesting little corners of C# and LINQ, and build your own snippets library. This was a long post so thanks for sticking around to the end, apologies to Matt Damon, but we’re out of time.
Closure is equipped with a rich set of object/array processing (forEach, map, etc.) and DOM manipulation tools. Until recently, I had to include underscore.js along with jQuery to get the equivalent functionality.
Closure also has its own template system, a very robust XHR library which includes an XhrManager that can pool several XHR requests to save resources, a large UI library (browse the library demos to get a feel for the UI elements that are built in), the best javascript rich text editor I’ve seen to date, and several nice extras that jQuery would require plugins for, like the Url parser (goog.uri.Utils).
It's also a very stable library. Closure is so deeply entrenched in many Google applications that they cannot make significant breaking changes at this point. It also means that new code is heavily vetted to ensure longevity. For example, to watch for events in Closure, use listen or listenOnce. How would you do it in jQuery? bind, one, click, delegate, live, on? Which of these is deprecated? How do their signatures differ? (On a side note, I really dislike jQuery's "function overloading", which uses different types for arguments and determines intent by doing type checks.)
The real boon, however, is Closure's ability to verify types and check for errors during compilation. gjslint and fixjsstyle are useful pre-compile tools for error checking and for automatically fixing formatting issues to ensure consistent readability.
Overall, I've been very impressed with the breadth of the Closure Library. Its design and consistency across the codebase make it easy to write and maintain applications. It addresses a very particular javascript problem - large codebases - extremely well, but it cannot replace jQuery for small applications. Even so, my exposure to Closure has helped me see how to improve my javascript in general, and hopefully we'll begin to see its ideas incorporated into jQuery in the future.
See the API documentation to browse the library.
Consider this familiar line:

$('#save').click(saveModel);
#save is the id of a DOM element of some sort, likely a button. If you click on it, saveModel() will execute. This is the same as below:
$('#save').bind('click', saveModel);
jQuery's bind watches #save for click events and fires saveModel when it detects one. We could also force a click event by manually firing it with trigger:
$('#save').trigger('click');
Any behavior that would have been initiated by actually clicking the button is kicked off by trigger.
Less well known is that bind and trigger can also be set on non-DOM objects:
[code sample omitted]
The jungle object keeps a running count of animals it encounters. bind sets up listeners for the main animals in the jungle, 'monkey' and 'bear' (just go with it). When the document is ready, we fire off 4 events, 2 of which match our listeners, and addAnimal executes accordingly.
Notice that event names are case sensitive, so 'MONKEY' did not trigger addAnimal. Typos can be a real head-scratcher, since the browser doesn't consider it illegal to be broadcasting 'MONKEY' sporadically. Google's Closure Library assigns common events to constants for this reason, a practice I mimic in my own code. At runtime, if the property is misspelled, the browser should report an error to the console.
trigger links the 'jungle' to event.target when it fires, so we have a reference to the original object within the addAnimal function.
This example is somewhat contrived. But imagine that we had an application that used server push and Ajax calls, and that we wanted to encapsulate both features into a single class:
[code sample omitted]
All we care about regarding our data instance is whether a 'NEW-ANIMAL' arrives. We don't care if it came in through a server push or from an Ajax GET.
The second parameter to trigger is an array of parameters to pass to any listeners. By default, an event object will be the first parameter passed to bound functions; subsequent function parameters will be the array parameters, in order.