I am attempting to post from my iPad for the very first time! This is not going to be a tremendously interesting post, but it sure is interesting to ME!
I am thinking today (as so many Canadians are) about the final Tragically Hip concert that will start in a little over an hour. One of the articles I read mentioned how unusual it seems for the very private Downie to publicize his diagnosis and contrasted that with how David Bowie handled his.
I found myself feeling very grateful for the opportunity to celebrate, thank, and grieve the forthcoming loss of this person who has contributed so much to the culture of our country. And I thought about some of the ‘farewell parties’ I’ve lately been hearing about – people who have terminal diagnoses participating in the celebration of their lives with their loved ones. There was even one portrayed on “Grace & Frankie.”
Death is hard for us to accept. We don’t like it and we prefer to not think about it or talk about it. But it is coming, for each of us, and despite what we imagine and hope and plan for the future, we don’t know when. By being open with his diagnosis of terminal brain cancer, Gord Downie has created space for us to talk about this most fundamentally shared human experience, and has in some ways held our hands as we look at it – frightened, angry, and sad. In some ways, it feels like another brilliant song he’s written to help us feel things that we couldn’t find our way to on our own.
I’m grateful for the example of a well-lived life. Thank you, Mr. Downie, for inviting us to your goodbye party and giving us the chance to say goodbye.
I was reading a recent post on machine learning from one of my favourite technical writers (Julia Evans) and was inspired enough to comment that I signed up for Disqus under my own name for a change 🙂
However, it turns out that the post was closed for comments, so I thought that I’d write up my thoughts here. I’m pretty busy today so I’m just going to paste it as a quote rather than edit it into a more ‘first-person’ writing style.
This is a pretty interesting topic overall – I have largely avoided machine learning b/c of a lot of concerns about how it’s used (and its results assumed to be ‘normal’ or ‘correct’) but recently decided that that’s the wrong approach. Instead, I’m going to learn more about it so I’m in a better position to critique how it’s used and point out implicit assumptions and biases. To that end, I’ve signed up for the Stanford course that just started.
Even in the first lesson in that course, I saw some interesting examples that made my skin crawl. I think the biggest issue I have with it (and I am JUST starting to learn) is that there’s an implicit assumption that all relevant information is externally observable and that conclusions drawn from objectively measurable data/behaviour will be correct. I’m fine with that when it involves some kinds of events, but I get very uncomfortable when we’re applying it to humans. So much of human motivation is invisible/intuitive that leaning so heavily on machine learning (which necessarily relies on events that can be observed by others & fed into algorithms) leads to things like, as you say, the Target pregnancy issue. There are many other detrimental ‘positive feedback’ effects of reinforcing/strengthening conclusions based on visibly available data – gender-segregated toys is a primary one. “65% of people buying for girls bought X, so when someone is shopping for a girl, we’ll suggest X. Look! Now 75% of people shopping for girls bought X – we were right! MOAR X for the girls!” [eventually, the ‘for girls’ section is nothing but X in various sizes, shapes, and colours (all colours a variant shade of pink)]
Another ML issue that came up for discussion when I was working at Amazon was: some people consider LGBTQ topics to be inappropriate for some readers, so even if someone had actively searched for LGBTQ fiction, the recommendation engine was instructed to NOT suggest those titles. That has the effect of erasing representation for marginalized people and increasing isolation among those who are already isolated. In fact, one could argue that one of the things that ML does best is erase the margins (obviously, depending on how it’s implemented, but in the rush to solve all problems with it, these types of questions seem to be ignored a lot).
I mentioned positive feedback loops before. The analogy I have in my head is: ML type algorithms (unless you build in randomness & jitter) amplify differences until you end up with a square wave – flat and of two polar opposite values. Negative feedback loops lead to dynamic yet stable equilibriums.
I mean, it’s obviously here to stay, and it clearly has some very significant beneficial use cases. But there are a lot of ethical questions that aren’t getting a lot of attention and I’d love to see more focus on that over the technical ones. Thank you for mentioning them 🙂
The more time I spend in this industry, the more I believe that the one Computational Ethics course I took back in my CS degree wasn’t nearly enough, and that we could really use a much broader conversation in that area. [To that end, I’ve also signed up for some philosophy courses to go with the ML one ;)]
Hello all! I moved my site to new hosting and didn’t know that the export wouldn’t bring the media with it. So there are a lot of broken image links. I’ll be working on restoring them quickly, but your patience is appreciated (especially where screenshots are involved).
Have a great day!
I was in San Francisco yesterday, interviewing for a job with my ideal company. In the first interview, which was a more technically focused one, I noticed that I often prefaced what I was saying with “well, obviously” or just “obviously”. When I noticed that, I commented on it out loud: “hunh, I’m not sure why I’m saying that – I don’t know what’s obvious to other people – it’s all based on individual experience and familiarity with things.”
The fact that I thought “why am I saying that” led me to re-read Nick’s post “Why am I saying this?” and to really reflect on what the word ‘obviously’ means when I say it and what it might mean to others hearing it. When I was saying it, I wasn’t really meaning that what came next was necessarily obvious to anyone – myself included. It’s a verbal tic that I developed at some point along the way, and probably an inherently defensive/self-protective one. I don’t feel like delving into all the subtle biases and one-upmanship that often happens in tech, and how people who don’t look like what everyone thinks of when they think of someone in tech learn coping habits, but I suspect that’s where my ‘obviously’ comes from.
So let’s walk through what it means to me and what it might mean to others.
What is going through my head when I say ‘obviously’ at the beginning of a sentence?
What goes through my head when I hear ‘obviously’ at the beginning of a sentence? (putting myself in the listener’s place)
So where’s the upside in using the word ‘obviously’ to introduce a statement? The only value seems to be protecting my own identity/ego. And since I’m working on not doing things to protect my identity/ego because that protection leads to a lack of real communication and connection between people, any upside to me is heavily outweighed by the potential downside to others. Allowing myself to be vulnerable, and to be wrong, and to be seen being wrong is an important growth goal for me.
In fact, there seem to be a lot more obnoxious ways to use the word ‘obviously’ than helpful ones and I’m going to work on purging it from my spoken usage *. Even if something does seem obvious to me, what value is there in stating that? How does that help anyone with anything?
* I was trying to think of genuinely inoffensive ways to use the word and the best example I could come up with is play/stage direction notes and narrative:
But for first-person usage? I’m not sure it’s ever a good idea. [Maybe in cases where defensiveness is actually warranted (Barty Crouch has just accused you, Harry Potter, of casting the Dark Mark, I guess?)]
Before we get to the next season of Game of Thrones, I want to get my personal theory about what’s going on with Jon Snow published so I can point back at it and gloat later.
They all seem very convinced that what’s going to happen in Season 6 is that Melisandre will resurrect Jon Snow from his vicious murder by Ollie (a la Thoros of Myr resurrecting Beric Dondarrion). See, me, I don’t buy it. Not because I think Jon’s gonna stay dead, but because I don’t think he ever died in the first place.
GRRM’s text left it hanging, the same way he did when Arya was blinded, when Tyrion was supposedly drowned by the Stone Men, etc etc etc. Sure, Jon was stabbed. A lot. And in the show, sure, that was a pretty big pool of spreading blood and Kit Harington’s lovely blue eyes were open and staring up at the sky, unblinking. But y’know what keeps you from bleeding out so fast? Motherfucking freezing temperatures. I actually know this firsthand from an unfortunate slip & fall when I was in grade 1 in Saskatchewan. I had a pretty harsh cut on my little head that stayed frozen and contained until I got home and inside when it really started bleeding.
See, we know that sometimes, they come back wrong. Beric didn’t, but he didn’t come back fully right, either. Lady Stoneheart came back wrong (although, in my opinion, she wasn’t that right when she was alive either – her predecessor falls in the “Ron Weasley” category of fictional characters for whom I have very little sympathy). And if Jon is going to be the hero I think he will be, as important to the show/series as we all think he is, then it’s important that he is fully human, not a shade, not a zombie (whether an ice zombie or a fire zombie or whatever. A slush zombie). It’s crucial that he not have died.
So my money is on him walking up to the brink of death & not stepping off that cliff. Melisandre may help keep him from dying. But I unequivocally do not accept him dying & being resurrected. That breaks a fundamental part of what makes him a compelling character – he needs to be OUR hope, which means he needs to be human, like we are.
As someone who is easily distracted by brightly coloured shiny things, which is great when scuba diving in the tropics, but not so great when working on a pretty Retina Display Mac with all sorts of bouncing icons, infinite browser tabs, etc, I use the tools available to remove obstacles in my way. Or indeed, the ones NOT in my way but off to the side of where I want to be going but that are so tantalizing. One such tool is the open source Mac app SelfControl.
The basic functionality of SelfControl is that you set up either a blacklist of sites you don’t want to allow yourself to access for some amount of time or you go hardcore and set up a whitelist of sites that you WILL allow and ban the rest of the world as unacceptable distractions. [Aside: an anti-spam/anti-virus company I used to work for preferred the terms ‘blocklist’ and ‘allowlist’ instead of ‘blacklist’ and ‘whitelist’ for a variety of reasons, some cultural, and where I need to use them, I’ll be using those terms as well.]
With SelfControl, you decide how long you want to focus for, set the slider for that amount of time, and press ‘Start’.
The way it blocks sites is by modifying the Mac’s hosts file (and firewall) so it needs to use admin privileges, which is why you have to enter your password. For many, that’s a decent way of it asking “Are you sure?” because the average user isn’t going to know how to undo the changes manually – that’s part of why it’s effective.
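To give a rough picture of what that hosts-file modification looks like, here’s a sketch – the domains are made-up examples of my own, not anything SelfControl actually ships with. Each blocked domain gets pointed at the local machine so requests to it go nowhere:

```shell
# Sketch of the kind of entries a blocklist-style tool adds to /etc/hosts.
# The domains here are hypothetical examples; don't edit your real hosts
# file by hand unless you know how to undo it.
HOSTS_BLOCK='127.0.0.1  twitter.com
127.0.0.1  www.twitter.com
127.0.0.1  reddit.com
127.0.0.1  www.reddit.com'
echo "$HOSTS_BLOCK"
```

Pointing a domain at 127.0.0.1 means the browser tries to connect to your own machine instead of the real site, which (assuming you’re not running a web server locally) just fails.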
And that’s a perfectly helpful use case: person says “I need to focus for 2 hours straight, and I’m my own worst enemy, so block distractions RIGHT NOW”. But it’s not the one I’m most interested in, personally.
You see, when there’s something that I’m particularly avoiding starting (usually writing), I won’t necessarily even get to the point of starting the app. There are different tiers of self control and what I’d like to do is set myself a regular schedule with blocks at certain times of day. There’s an argument to be made that if I can’t even boot the app & click the button, I have bigger problems to sort out, but if there’s a way to make the process more structured and automatic, I’d prefer that. And I’ve heard from others that they feel the same way – they think scheduling would be useful.
There are a number of things that I’ll need to work through to get that going (not least of which is figuring out how to automate the privilege escalation on a scheduled basis – maybe cron?) but the first hurdle I had was getting the app to build at all.
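As a very rough sketch of the scheduling idea – and everything here is hypothetical, since SelfControl doesn’t ship any of it – a line in root’s crontab could kick off a block every weekday morning, with root’s privileges standing in for the password prompt:

```shell
# HYPOTHETICAL: a line for root's crontab (edit with `sudo crontab -e`).
# /usr/local/bin/start-selfcontrol-block is a wrapper script I'd have to
# write myself; running it from root's crontab sidesteps the admin prompt.
# minute hour day-of-month month day-of-week  command
0 9 * * 1-5 /usr/local/bin/start-selfcontrol-block
```

Whether that’s actually the right mechanism (versus launchd, which is the more Mac-native scheduler) is one of the things I’d need to work through.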
I had tried to do this about 8 months ago, with NO success. I haven’t been a Mac developer at all and until fairly recently, I hadn’t been a developer for over 10 years. A lot has changed, y’all! And one of the biggest changes has been the burgeoning mass of package managers to simplify installation of apps, libraries, etc. Although I use Homebrew to install apps on my Mac, I hadn’t heard of CocoaPods and didn’t know there was a required step to run ‘pod install’ to get the required prereq libraries installed in the build directory. [It’s useful to run your build instructions past true newbies to find the steps that are SO familiar/basic that it doesn’t occur to you to write them down].
At the Recurse Center, I learned about a lot more package managers, and that there’s at least one for every platform. I already knew about npm & gem, but not pip, and definitely not pod. So I realized that there was a missing step in the SelfControl instructions – one that would be so automatic for Mac app developers that they wouldn’t consider it missing, but for someone who wanted to start their Mac OSS development with SelfControl, it was pretty crucial. Now, I don’t want to suggest that I don’t know how to search for solutions to issues, nor that you don’t. But when you run up against something where you don’t have a reference point for what’s missing, the amount of the unknown is completely unbounded. You have no idea how far away the finish line is, and if your drive to do this is hobbyist-level, you may bail, like I did last summer.
So, armed with this new knowledge, I tried again to build SelfControl from scratch. I got a bunch of failures (including the promised code-signing ones) but some of them are due to a recent Ruby change that apparently breaks CocoaPods. This post is getting long, so here’s a link to how I got through ’em. It’s ALSO long but that’s largely due to a bucketload of screenshots.
This post documents the steps required to be able to clone and build SelfControl from absolute scratch – as in, you haven’t done any Mac development to speak of before at all *as of Feb 12, 2016*. I assume you’re running El Capitan.
The only prerequisite I’m going to assume you have is Xcode because:
(Edited: the above is not quite true. While writing the below I realized that I’m also assuming you are using Homebrew. Look, I’ve tried MacPorts, and I’ve tried Homebrew, and although there are a lot of pluses about MacPorts, the developer zeitgeist seems to be around Homebrew. It’s just easier, ok?)
Other than that, I’m assuming you don’t have any extra libraries or tools installed. The reason I’ve listed an exact date above is that I think that part of why this didn’t work for me but did for others is because of a recent change to Ruby that broke CocoaPods. So others couldn’t help me because either they weren’t using that version of Ruby or they’d already gotten their pods successfully installed; the SelfControl build wasn’t failing for them.
We’ll start with the official build instructions & go from there. I may submit a pull request to update some of the steps – step 2 is definitely wrong once you install CocoaPods.
This is completely correct. However, if your goal is to contribute to SelfControl, you probably want to fork the repo and then clone your fork instead (after all, why are you building from scratch if you aren’t planning to contribute?). But for this walkthrough, you can just clone the official repo if you want. Go to the directory where you want to do your build and run:
git clone https://github.com/SelfControlApp/selfcontrol.git
If you forked the repo, you’d just replace it with your copy, e.g.
git clone https://github.com/karamcnair/selfcontrol.git
That’s what I did. Here’s what you’ll have in the directory after the clone.
Nope. I’m not sure if the instructions are just out of date or whether there are multiple paths to building this project with CocoaPods, but if you follow the instructions directly and try to build the .xcodeproj file, you’ll get missing-dependency errors because the CocoaPods-managed libraries aren’t there.
So we’re going to take a detour from the official instructions at this point because this is what threw me off last summer and where we’re going off-road in our quest to get this working.
The way we do that is to use ‘gem’, the Ruby package manager. The process should be:
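As a sketch, the commands look like this (run from the cloned selfcontrol directory; whether you need sudo depends on whether you’re on the system Ruby or a version manager):

```shell
# Install the CocoaPods gem, then pull in the project's pod dependencies.
sudo gem install cocoapods   # sudo is for the system Ruby; drop it under rvm
cd selfcontrol               # the directory created by the git clone
pod install                  # reads the Podfile; creates SelfControl.xcworkspace
```

If everything works, `pod install` downloads the dependencies listed in the Podfile and generates a workspace file. Read on for why, as of this writing, it did NOT just work for me.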
WARNING: CocoaPods is currently (remember, this post is as of 2016-02-12) in beta for a new version with a totally different Podfile format. At first I accepted their suggestion to upgrade to the beta version, ended up rewriting the file, and it STILL didn’t work. Do not accept the beta version at this point. That’s not what the problem is.
Note the error we’re getting here: Undefined method ‘to_ary’? ¯\_(ツ)_/¯
And at the end of the output we see this:
Something that took me longer than I’m happy with to notice:
[But why would I even care? It knows what it’s doing, right?]
So what is the problem? How come everyone else building SelfControl doesn’t have this error? Well, it turns out that we’re too up to date, my friend! Our systems are too pristine, too fresh! We have the newest Ruby, the version that ships with OSX, and it’s version 2.3.0 (in my case) and not version 2.2. And there’s a problem with 2.3.x, as far as CocoaPods is concerned.
Let’s see if that’s the problem. What Ruby we have now?
Yup. That’s probably it. So how do we fix that? This helpful person has an answer for us:
To do that, we use these instructions to install rvm (Ruby Version Manager) and tell OSX which version of Ruby we want to use. But before we do that, let’s get rid of our current (bad) install of CocoaPods.
I know I could have just had you start by using rvm to pick Ruby 2.2 so that you wouldn’t hit the failing pod install but there are two reasons I didn’t:
gem uninstall cocoapods
Then install RVM as per the instructions:
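For reference, the install boils down to something like this (check rvm.io for the current instructions rather than trusting my snapshot):

```shell
# Install RVM, then install and select Ruby 2.2 as the default.
\curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm    # or open a new terminal window
rvm install 2.2
rvm use 2.2 --default
ruby -v                      # should now report 2.2.x
```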
Huzzah! We have the right Ruby! Let’s see if we can fix the Pod Problem!
gem install cocoapods
Hey! look at that! It worked.
So here’s where something weird is, that I don’t want to take the time to fully replicate on a freshly installed machine: The first time I got this working, THIS was the output from the ‘pod install’ command:
Note that it says to use the .xcworkspace file, not the .xcodeproj file. And it’s telling the truth. My directory has a SelfControl.xcworkspace file in it after ‘pod install’ but it didn’t tell me to use it this time. But if I don’t, and use the SelfControl.xcodeproj file, here’s what Xcode complains about:
See? The pod dependencies aren’t built. So we use the Workspace instead:
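Opening the workspace from the command line is just:

```shell
# Open the CocoaPods-generated workspace (NOT the .xcodeproj) in Xcode.
open SelfControl.xcworkspace

# Or build from the terminal; note the scheme name here is my guess at
# what the project defines, so check Xcode's scheme list if it fails:
xcodebuild -workspace SelfControl.xcworkspace -scheme SelfControl build
```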
Cool, cool. Now we’re actually getting somewhere. The pods are building and so is SelfControl! But. Here come the promised code signing issues.
OK, this post is too long now. And the code signing issues are ones that are likely common to multiple projects that have nothing to do with CocoaPods, so I’ll be adding a follow-on post just to walk through the OSX Code Signing traps!