spindas: (Default)

Another week gone, another Sunday, and here I am sitting down to write about more drawing practice.

Today I worked through the next section of Drawing on the Right Side of the Brain. The first pages of this chapter greeted me with a list of supplies I'll apparently be needing at various points in the course. I have paper and pencils already, as well as a "marking pen" (a Sharpie) lying in my desk drawer. Pencil sharpener? Check. Erasers? Check! Graphite stick, masking tape, clips, drawing board, picture plane, viewfinders, mirror? Uhhh... I'll get to those when I need them.

Last session was all reading and rationale; this time, I actually got to pick up a pencil. The first exercises were three "pre-instruction" drawings, serving as a sort of warmup and as a record to look back on when my skills have hopefully advanced.

Assignment number one: draw a specific person from memory, #2 pencil on paper, paper on "drawing board". (The campus bookstore doesn't carry those, so a clipboard will have to do.) Since I'm posting these all on the wide open Web, I decided to go with a historical figure over someone from my life; that way, no one will have to be embarrassed by my butchering their likeness here. 😅

Honest Abe. Right-hand source: Wikimedia Commons (public domain)

Halfway through sketching Mr. Lincoln, I realized I had no idea what to do about his suit, or clothes in general. At least I got the iconic beard and top hat in.

Assignment two: facing a mirror, prop the "drawing board" clipboard up between the wall and my lap and attempt a self-portrait.

Self-portrait, ish

A perfect likeness, no? Well, at least this is the "pre-instruction" stage.

I've read that hands and eyes are really hard to get right, but no one warned me how difficult mouths would be. When you really look at them, lips are these weird, squiggly, lumpy shapes that are hard to draw as you actually see them. After several attempts I was very tempted to just put a line through the middle of an oval and call it a day. I had a lot of trouble with the chin as well, trying to get it to "pop out" three-dimensionally from the rest of the face. Another lesson: it's tough to do shading without making it look like facial hair, especially if you want to include that little dip between the nose and mouth without it resembling a certain infamous toothbrush mustache.

Speaking of hand-drawing difficulty, assignment three was to set the pad on a desk and sketch my non-dominant hand. Behold:

Left hand

Of the three tasks, I'm surprised to say this is the one of which I'm the most proud, or at least the least embarrassed. Drawing in the thumb after the rest of the hand was particularly instructive. My initial instinct was to draw it as another flat curve facing straight toward the viewer, as if my thumb weren't opposed to the rest of my hand. It took some slowing down and inspection to make it properly point sideways. I think it turned out better than the other fingers as a result.

After these exercises, the author touches on a few more points before the end of the chapter, including the idea of personal style in drawing. She writes that it shows through even in "realistic" drawing, where one is trying to hew closely to the world as it really appears. This is because style is more a reflection of how one perceives the world than of idiosyncrasies in the mechanical motions of the hand. She also compares it to a signature, presenting a set of sample signatures of the same name and encouraging the reader to make inferences about the different sorts of people who would produce each one.

And... wow, it's already midnight, isn't it? I suppose this will be going up on Monday, not Sunday after all. To end this off, here's a bonus doodle of "The Scream", but with a dragon:

Scream dragon

Until next time! Which will hopefully come sooner than the last one.

spindas: (raccoon)

Drawing is a skill I've wanted to develop ever since I was a kid sketching out imaginary video games in the backs of school notebooks. I never really "got around" to putting time and effort into formally studying it, until last year, when—according to Amazon—I purchased a copy of Drawing on the Right Side of the Brain by Betty Edwards, along with a big green sketch notebook and a set of art pencils.

Amazon

At the time I resolved to make time in my schedule for regular drawing practice, an hour or so a day, a couple of days a week. In stories from others around the web who taught themselves to draw, the most common advice I've seen is to just keep sinking time into it on a regular basis until your brain crosses some threshold of practice and starts to get better at the task. If I'd spent the past year plugging away at this, maybe I'd have reached that point by now; unfortunately, apart from one or two initial sessions, my new supplies just sat on a shelf collecting dust.

I still want to learn to draw, though, so when 2016 became 2017 I told myself that I'd start that weekly regimen over again, Monday-Wednesday-Friday, and this time I'd commit to it. Of course it's now nearly the end of January and today is a Sunday, but "better late than never", or something.

I spent today's session working my way through the first two chapters of Drawing on the Right Side of the Brain, which form an introduction to the background, approach, and teaching methods of the book—so the pad and pencils sat untouched for this first hour. The author makes the case that learning the basics of drawing is a matter of building up perceptual skills, and that these skills can transfer to problem solving and reasoning in areas besides art for art's sake. One page includes a series of sketches of mechanical designs, process diagrams, and so on, excerpted from the notebooks of well-known scientists and engineers; this reminded me of how I'd struggled to come up with similar visualizations when trying to communicate design ideas to my high school robotics club, and how useful some basic sketching skills would have been.

Edwards emphasizes an instructional approach geared toward turning down the verbal, rational, explicit "left side" of the brain and shifting the task of drawing to the "right side", which handles visual perception and more intuitive, not-completely-conscious skills like riding a bike. I'm uncertain of the scientific basis for allocating these functions along an exact left/right division, but the notion works as a metaphor either way. She also downplays the idea of an innate, fixed level of talent and states, repeatedly, that her book is targeted toward "absolute beginners" with "low-level drawing skills and with high anxiety about their potential drawing ability"—which I find reassuring, because that's a pretty good description of me approaching this work.

Monday's session should involve some actual drawing exercises, but for now, here's a quick "before" doodle of a dragon and an owl.

Doodle

spindas: (raccoon)

A snapshot of the LiquidHaskell to-do list I keep in my TreeSheets planner, with an explanation of each item. Half blog update and half getting-my-thoughts-in-order.

  • Cabal/stack plugin: Live at spinda/liquidhaskell-cabal with a demo at spinda/liquidhaskell-cabal-demo.
    • Publish to Hackage: All ready on my end, but I’m currently blocked awaiting a new release of LiquidHaskell to Hackage containing the tweaks I had to make to the source to get this to work.
    • Improve LH’s CLI output: Right now LiquidHaskell’s output is pretty noisy, and that doesn’t fit in well with the rest of the Cabal output. I’ll work on improving this when I get a chance.
    • Improve multi-target support: The Cabal plugin uses LiquidHaskell’s existing support for checking the correctness of multiple modules, passing it all the source files in the project at once. At the moment the implementation is rather naive, rerunning the entire pipeline for each module and doing more recompiles than it needs to. As part of the ApiAnnotations work (see below) I’m going to change this so we check all target modules in a single pass instead.

  • Ambiguity error in Bare: Really simple: there’s a spot where we’re resolving names and, if there’s more than one possible match, we pick the first one. Instead we should be throwing an error about the ambiguity. See issue #525.
  • ApiAnnotations: Have LiquidHaskell extract comments via GHC’s ApiAnnotations interface instead of parsing them out ourselves; see issue #617. At the moment LH does a recursive parsing thing to identify all the modules and source files it needs to process, parses in all the specifications, and then loads the modules into GHC. I’ll need to change this so that the parsing happens after each module is loaded into GHC in order to use the ApiAnnotations interface. While I’m making this change I can improve our support for multiple target modules. This will also lay some groundwork for the .lqhi changes (see below).
  • Package DB version check: Issue #612 seems to have been caused by a case where the version of GHC that LiquidHaskell was compiled with was trying to load an interface built by an older version of GHC. To guard against this I’ll add a check to make sure that the GHC version in the environment’s package database matches what we expect.
  • .lqhi intermediate files: In processing imported modules, LiquidHaskell needs access to information that is only available when GHC has just finished compiling that module. As a result, for each target module, it currently recompiles all of that module’s recursive dependencies every run to get at that information. In my GSoC ‘15 work I implemented a feature where this information is saved to intermediate .lqhi files after each module is processed and quickly loaded back in when needed in a future run. This is currently stuck on my outdated fork, however, so I’ll need to port it over to the current version of the codebase.
    • Get it working with files first: The first iteration will create the .lqhi files and load them back in when they’re needed. This will require that LH’s specification-extraction pipeline process each module independently instead of smooshing them together and extracting everything all at once. The ApiAnnotations work will accomplish part of this.
    • Move to GHC Annotations: Once the GHC plugin is in (see below), transition this to store the intermediate information in GHC Annotations, inside the Haskell .hi files, instead.

  • GHC plugin: Another thing I did in my GSoC fork was make LiquidHaskell work as a GHC plugin instead of a standalone executable. I’d like to get that finally ported over and merged into master, as an alternative entry point instead of replacing the CLI interface altogether. I already have a work-in-progress copy of this that’s functioning in some cases, but it still has a ways to go (and needs the .lqhi stuff to be implemented) before it’ll be ready for real use.
    • Merge common code: My current version has its own modified copies of code used in the CLI interface. I need to unify these again and have them using a common interface/code.
    • Fix name resolution: There’s a bug where names in imported modules need to be fully qualified. I’ve solved this same bug before; I just need to track it down and do it again.
    • Unify with CLI: At some point I’d really like to turn the standalone CLI interface into a wrapper around the GHC plugin. Essentially it would run GHC on the target modules with the LH plugin enabled. Then the separate .lqhi files could disappear entirely in favor of information storage in annotations in the .hi files.
    • Testing: I need to get the GHC plugin integrated with our test suite, which is currently built around the CLI interface. The “Unify with CLI” step would remove the need for any extra work here.

  • Find and fix <interactive> error: There’s a pesky (GHC?) bug where, in certain cases, triggering an error on our end will cause GHC to try to read a non-existent file called <interactive> as it tries to generate the error message. I’ve tracked this down to an exception thrown in a particular spot in GHC, but I’ll need to do a lot of walking up the stack from that exception to see what’s catching it and ultimately producing the error.
  • Fix test setup, drop idirs CLI flag: LiquidHaskell currently has a command line option to specify include directories for GHC to look in when compiling the module source. This shouldn’t really be separate from the --ghc-option flag we already support to pass options to GHC, but it’s necessary right now given the way the test suite is set up. Once the Cabal/stack/GHC plugin support is landed, I’d like to (at some point) rework the test suite to use one of those instead of the manual test runner, then drop support for this flag once nothing else needs it.
  • New parsing: A longer-term project is to replace the current parser, which has its fair share of weirdness and error cases, with a new version based on the one I implemented in my GSoC ‘15 work and using megaparsec instead of parsec. This is a slog to get through and very prone to inducing burn-out, so I’m prioritizing other things above it for now. But it’ll also be a big help, so I want to get it done eventually.
spindas: (raccoon)

For whatever reason, Dreamwidth’s default rich text editor (built on an old version of FCKEditor) hides a lot of the available toolbar buttons. I’ve written a quick userscript that enables the full set of buttons in the editor and also increases its height a bit to make writing more pleasant. I’ve tested it on Firefox with GreaseMonkey; it should presumably work on Chrome/Chromium as well.

// ==UserScript==
// @name        Better Dreamwidth Editor
// @namespace   betterdweditor
// @include     https://www.dreamwidth.org/update?usejournal=spindas
// @include     https://www.dreamwidth.org/editjournal*
// @version     1
// @grant       none
// ==/UserScript==

window.onload = function () {
  // The rich text editor is loaded in an iframe with this ID.
  var frame = document.getElementById('draft___Frame');
  frame.onload = function () {
    // Once the reloaded editor is ready, give it some extra height.
    document.getElementById('draft___Frame').style.height = '600px';
  };
  // Reload the editor with Toolbar=Default, which turns on the full set
  // of toolbar buttons instead of Dreamwidth's pared-down selection.
  frame.src = 'https://www.dreamwidth.org/stc/fck/editor/fckeditor.html?InstanceName=draft&Toolbar=Default';
};
spindas: (raccoon)
Firewatch is a story-centric game by Campo Santo and Panic. I played it all the way through this weekend and had a really great time with it. Besides the top-notch voice acting, gorgeous visuals, and mystery-filled plot, one feature that’s been getting a lot of attention is the disposable camera the player carries through most of the game. Starting with 18 unused shots on the roll, you can fill it up with snaps of trees and trails and plenty of sunsets peeking over the hills. At the end they’re all uploaded to a personal page on Firewatch.camera, where you can share them or order prints from the "Fotodome".

Firewatch photo upload screen

At the end of my own playthrough, I uploaded my shots and logged onto the site to see them.

My camera roll

But one of them came out… weird.

Huh, that's weird

Strange distortion! And are those bits of QR codes in the corners? Could this be the beginnings of an ARG?!

I posted the Firewatch.camera link to r/Firewatch, a discussion board for the game on Reddit, to see what the resident internet detectives could make of it.

Reddit submission

Reddit discussion


Redditors sprang into action, loading the weird pic into Photoshop, filtering and cutting and turning it about in hopes of shaking out a secret.

Found a QR code!

My hunch was right—that was a QR code! Once they’d cleaned it up and put it back together, they found it led to…

The big secret

Reddit reacts (1)
Reddit reacts (2)
Reddit reacts (6)

Okay, maybe that's not exactly how it all happened.

Truth is, as I stared at the photo upload screen in a post-game slump, I got to wondering whether I could send in any old photo from my PC and have Panic print it off with the rest of the batch. I figured the game program probably sent the files to a hidden HTTP endpoint on firewatch.camera, so I broke out Charles Proxy, my tool of choice for HTTP interception. I like it because it’s easy to set up and supports SSL decryption, but other tools like mitmproxy could probably do the job as well. Looping back through the ending sequence again, I watched as my photos were retransmitted to The Cloud—this time with Charles listening in.

Charles picked up several requests to endpoints on https://www.firewatch.camera/api/v1/roll/. I reconstructed the first, to roll/create/form, with cURL on the command line:

curl -H "User-Agent: UnityPlayer/5.2.4f1 (http://unity3d.com)" \
     -H "X-Unity-Version: 5.2.4f1" \
     -X POST --data "email=bob@example.com" \
     https://www.firewatch.camera/api/v1/roll/create/form

The -H flags let us set custom headers so our request looks like it’s coming from Unity, the engine the game was built in. The next line sends along an email address (to which Firewatch Camera will send our photos page) with the HTTP POST request. Further experimentation revealed that this field can be left blank.

As luck would have it, the server was happy to handle my request, responding with:

EvergreenBasinDrive

My original Firewatch.camera page appeared at https://firewatch.camera/SoftAcadiaCamp/, so I inferred that create gives back a unique key for the new set of photos.

Mimicking the flow that Charles mapped out, the next step was to send in a JPEG image to roll/EvergreenBasinDrive/upload_photo:

curl -H "User-Agent: UnityPlayer/5.2.4f1 (http://unity3d.com)" \
     -H "X-Unity-Version: 5.2.4f1" \
     -X POST -F index=17 -F photo=@firewatch.jpg \
     https://www.firewatch.camera/api/v1/roll/EvergreenBasinDrive/upload_photo

The game submits photos in reverse order, with an index parameter starting at 17 and counting down to 0 for a full batch. (It won’t take more than 18 photos total—I’ve tried.) The -F photo=@firewatch.jpg tells cURL to attach the contents of firewatch.jpg on my system as a file upload called "photo".

In short order I heard back from the server again:

{
    "status": "OK"
}

Alright! One photo submitted. A couple more and it was time to try out the last endpoint on the list, roll/EvergreenBasinDrive/complete. Again matching the Charles log:

curl -H "User-Agent: UnityPlayer/5.2.4f1 (http://unity3d.com)" \
     -H "X-Unity-Version: 5.2.4f1" \
     -X POST --data "success=1" \
     https://www.firewatch.camera/api/v1/roll/EvergreenBasinDrive/complete

{
    "status": "OK"
}

Presto! My custom photoset was finalized and visible at https://www.firewatch.camera/EvergreenBasinDrive/.

Custom camera roll

Some more playing around along these lines let me suss out the remaining requirements and error cases in the API. I bundled this knowledge into a plug-and-play Node.js library and command line tool as a little demo.

node-firewatch.camera on GitHub
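
If you’re curious what the library boils down to, here’s a minimal sketch of the same three-request flow in plain Node.js, with no dependencies. The endpoints, headers, and parameters are exactly the ones observed above; everything else—the hand-rolled multipart encoding, the blank email, the file name—is illustrative, and error handling is omitted:

// A rough sketch of the create -> upload_photo -> complete flow.
var fs = require('fs');
var https = require('https');

// Headers that make requests look like they come from the Unity player.
var UNITY_HEADERS = {
  'User-Agent': 'UnityPlayer/5.2.4f1 (http://unity3d.com)',
  'X-Unity-Version': '5.2.4f1'
};

// POST a body to an endpoint under /api/v1/roll/ and collect the response.
function post(path, body, contentType, callback) {
  var headers = Object.assign({
    'Content-Type': contentType,
    'Content-Length': Buffer.byteLength(body)
  }, UNITY_HEADERS);
  var req = https.request({
    host: 'www.firewatch.camera',
    path: '/api/v1/roll/' + path,
    method: 'POST',
    headers: headers
  }, function (res) {
    var chunks = [];
    res.on('data', function (chunk) { chunks.push(chunk); });
    res.on('end', function () { callback(Buffer.concat(chunks).toString()); });
  });
  req.end(body);
}

// Step 1: create a new roll. The response body is the roll's unique key.
post('create/form', 'email=', 'application/x-www-form-urlencoded', function (key) {
  key = key.trim();

  // Step 2: upload one photo (index counts down from 17 for a full roll).
  var boundary = '----firewatch-sketch';
  var body = Buffer.concat([
    Buffer.from('--' + boundary + '\r\n' +
                'Content-Disposition: form-data; name="index"\r\n\r\n17\r\n' +
                '--' + boundary + '\r\n' +
                'Content-Disposition: form-data; name="photo"; filename="firewatch.jpg"\r\n' +
                'Content-Type: image/jpeg\r\n\r\n'),
    fs.readFileSync('firewatch.jpg'),
    Buffer.from('\r\n--' + boundary + '--\r\n')
  ]);
  post(key + '/upload_photo', body, 'multipart/form-data; boundary=' + boundary, function () {
    // Step 3: finalize the roll and print the public URL.
    post(key + '/complete', 'success=1', 'application/x-www-form-urlencoded', function () {
      console.log('Done: https://www.firewatch.camera/' + key + '/');
    });
  });
});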


With that all done, we can get back to the original joke! Inspired by the internal name roll (short for "camera roll") used in the API, I:

  • grabbed my real Firewatch pics from their Firewatch.camera page,
  • made a Rick Roll QR code (a sketch of this step follows the list),
  • whipped up the fake, distorted shot in GIMP, splitting up the QR code and hiding it inside, and
  • used my new tool to upload it to a new Firewatch.camera page, alongside some of those photos I actually took in game.
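
For the QR code step, any generator will do. Purely as a hypothetical sketch in the same Node.js vein, here’s how the qrcode npm package could produce one (the output file name is made up):

// Hypothetical: write out a QR code pointing at a certain well-known music video.
var QRCode = require('qrcode');

QRCode.toFile('rickroll-qr.png', 'https://www.youtube.com/watch?v=dQw4w9WgXcQ', function (err) {
  if (err) throw err;
  console.log('QR code written to rickroll-qr.png');
});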

All that remained was to stick the link on Reddit and let the fun unfold. 😉

spindas: (raccoon)

After doing Google Summer of Code last year, I’m applying for internships for the first time this year. So I’m nervous-yet-excited, and am really hoping that my resumes actually get looked at, as I don’t have any “employers” to put down so far.

Places I’ve applied to:

  • Mozilla in Mountain View – open source, makers of Firefox and Rust! This is kind of my dream job.
  • Adobe in San Jose – they’re really easy to get to from home, which is great since I don’t have my driver’s license yet.
  • Yahoo in Sunnyvale – also very reachable.
  • Facebook in Mountain View – one at WhatsApp, where they use Erlang! And two elsewhere in the company.
    Lesson learned: if you have a Facebook account, make sure you log in before applying, and use Chrome – the “Skills” field is currently broken on Firefox (this seems to be fixed now). Also, some fields have character caps that aren't apparent unless you proceed to the next page and then go back.

Places I’ve looked at:

  • Jane Street – it’d be fun to work in a functional language like OCaml, but they only have positions in NYC, London, and Hong Kong.
  • Galois – another FP (Haskell) shop that’s too far away (Portland).
  • Google – they’re all closed up for the summer internships. I’ll have to apply earlier next year.
  • Nest – lots of positions open, but the closest office is in Palo Alto, which lacks good public transportation to and from San Jose.
  • Twitter – again, the nearest office is in-state but not close enough (San Francisco).

As it turns out, location is a major limiting factor for me. I definitely need to start driving soon.


spindas: (raccoon)
  • LiquidHaskell – I’m continuing to contribute to the LiquidHaskell project at UCSD, working to improve the overall usability of the system for real-world projects. Currently this means adapting the new parser I wrote a while back to the current state of the codebase, which has undergone some significant refactoring recently. I’m also implementing an “unparser” to go along with it – a pretty-printer exclusively for turning the AST back into formatted (and colorized!) code, whose output should be parseable back to an AST that is semantically equivalent to the input. Progress is here.
  • Unreal Engine – I’m learning/experimenting with Unreal Engine 4. I have two loosely-defined ideas of things I want to make, but I’m mostly exploring what the tools can do right now. This also has me dabbling in 3D modeling, 2D design, and animation. I’m posting little screenshots and video snippets to my Twitter account, which is actually being used for stuff now.
  • School – A lot of school! Classes, homework, studying. I miss winter break.
  • Internships – I applied to my first internship yesterday, at Mozilla! I'm hunting for more places to apply to right now.
  • New site – Not exactly something I'm “working on” right now, but I have a new design up for my personal page. I really like this one; it was a fun challenge to get the scattered links to position themselves nicely across different screen resolutions.
spindas: (raccoon)

My proposal for Haskell's Google Summer of Code 2015 has been accepted! I'm super excited for this. There's a list of the other accepted proposals on reddit, and I've posted my full proposal as a Gist. I'll be working on embedding LiquidHaskell signatures in Haskell's native type system. I plan on posting more details about my work here on Dreamwidth as I go along. Woo!

spindas: (raccoon)
New Packages
Adopted Packages
spindas: (raccoon)
The existing package for the elementary Dark theme, aptly named elementary-dark-theme, is... really bad. It downloads the source archive over HTTP instead of HTTPS, doesn't use checksums to verify the download, gives the wrong source URL and license... So I put up elementary-dark, and filed a request to replace ("merge") the old package with the new one. After doing a lot of work with Debian packaging in the past, I was pleasantly surprised by how easy the PKGBUILD format is to work with.
spindas: (raccoon)

I'm working on integrating the Oculus Rift DK2 with the Ogre3D engine. Last night I got basic head tracking working: moving the DK2 in real life translates into camera movement in-game. The goal is to build a toolkit that makes building something for the Rift in Ogre super duper easy.
