I am still doing this, I swear 🙂 The mindfulness bell keeps turning itself off, which is odd. I haven’t figured out the right routine for me to remember to write every day but at least I remembered again.
Intention is still to take in the good. I think this is a good full-week intention. And I’m actually feeling pretty happy today (that’s a bit of a cheat, since it’s Friday).
I do have a 90 min float booked for this afternoon and then I have the evening alone because Bill is going out for dinner. I got the materials I needed to start on this course,
so I plan to get a nice warm fire going to ward off today’s downpour and play at being creative 🙂
Well, as expected, by about noon on Monday, I’d forgotten about my intention for the day. I did remember again yesterday though and was in the middle of typing up a post for the day (same intention) but I got derailed and then forgot about the whole thing even sooner!
I’m trying again this morning and I think I might set a reminder for about 2pm to poke myself again. The trouble is, I often get distracted so I set a LOT of reminders and now I’m in a ‘broken windows’ situation where I just ignore most of them. I could probably benefit from a reminder purge.
At any rate, I’m sticking with the ‘take in the good’ intention today, and possibly for the whole week. I did feel like it was having a positive impact (at least in the moment) on Monday and it’s certainly an area I would benefit from more strength in.
I have been reading “Buddha’s Brain” lately and I think it would be useful for me to explicitly state an intention for each day.
I want to follow the exercises in this, and other books, but I don’t tend to have these things bubble up in my mind regularly. So I’m going to try getting into the habit of explicitly stating an intention publicly to see if that might help.
Today’s intention is “to take in the good”. The exercise is intended to help counter the negativity bias that evolution has gifted us with (in order to help us survive).
But apparently actively “soaking in” positive events can help shift one’s default reactions to situations, and these days, I could really use that.
I am thinking today (as so many Canadians are) about the final Tragically Hip concert that will start in a little over an hour. One of the articles I read mentioned how unusual it seems for the very private Downie to publicize his diagnosis and contrasted that with how David Bowie handled his.
I found myself feeling very grateful for the opportunity to celebrate, thank, and grieve the forthcoming loss of this person who has contributed so much to the culture of our country. And I thought about some of the ‘farewell parties’ I’ve lately been hearing about – people who have terminal diagnoses participating in the celebration of their lives with their loved ones. There was even one portrayed on “Grace & Frankie.”
Death is hard for us to accept. We don’t like it and we prefer to not think about it or talk about it. But it is coming, for each of us, and despite what we imagine and hope and plan for the future, we don’t know when. By being open with his diagnosis of terminal brain cancer, Gord Downie has created space for us to talk about this most fundamentally shared human experience, and has in some ways held our hands as we look at it – frightened, angry, and sad. In some ways, it feels like another brilliant song he’s written to help us feel things that we couldn’t find our way to on our own.
I’m grateful for the example of a well-lived life. Thank you, Mr. Downie, for inviting us to your goodbye party and giving us the chance to say goodbye.
I was reading a recent post on machine learning from one of my favourite technical writers (Julia Evans) and was inspired enough to comment that I signed up for Disqus under my own name for a change 🙂
However, it turns out that the post was closed for comments, so I thought that I’d write up my thoughts here. I’m pretty busy today so I’m just going to paste it as a quote rather than edit it into a more ‘first-person’ writing style.
This is a pretty interesting topic overall – I have largely avoided machine learning b/c of a lot of concerns about how it’s used (and its results assumed to be ‘normal’ or ‘correct’) but recently decided that that’s the wrong approach. Instead, I’m going to learn more about it so I’m in a better position to critique how it’s used and point out implicit assumptions and biases. To that end, I’ve signed up for the Stanford course that just started.
Even in the first lesson in that course, I saw some interesting examples that made my skin crawl. I think the biggest issue I have with it (and I am JUST starting to learn) is that there’s an implicit assumption that all relevant information is externally observable and that conclusions drawn from objectively measurable data/behaviour will be correct. I’m fine with that when it involves some kinds of events, but I get very uncomfortable when we’re applying it to humans. So much of human motivation is invisible/intuitive that leaning so heavily on machine learning (which necessarily relies on events that can be observed by others & fed into algorithms) leads to things like, as you say, the Target pregnancy issue. There are many other detrimental ‘positive feedback’ effects of reinforcing/strengthening conclusions based on visibly available data – gender-segregated toys is a primary one. “65% of people buying for girls bought X, so when someone is shopping for a girl, we’ll suggest X. Look! Now 75% of people shopping for girls bought X – we were right! MOAR X for the girls!” [eventually, the ‘for girls’ section is nothing but X in various sizes, shapes, and colours (all colours a variant shade of pink)]
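That toy-store loop is easy to sketch in a few lines. This is a minimal simulation of the dynamic described above – every number (the starting 65% share, the size of the recommendation ‘boost’) is an illustrative assumption, not real data – but it shows how recommending based on observed purchases drives the share toward 100% on its own:

```python
# Toy simulation of the 'for girls' recommendation feedback loop.
# initial_share and boost are made-up illustrative parameters.

def simulate_recommendations(initial_share=0.65, boost=0.3, rounds=5):
    """Each round, the store recommends X in proportion to last round's
    purchase share, and the recommendation itself nudges some fraction
    of the remaining shoppers toward X (the 'boost')."""
    share = initial_share
    history = [share]
    for _ in range(rounds):
        # Shoppers who'd buy X anyway, plus a fraction of the rest
        # who follow the ever-stronger recommendation.
        share = share + boost * (1 - share)
        history.append(share)
    return history

for round_num, share in enumerate(simulate_recommendations()):
    print(f"round {round_num}: {share:.0%} of 'for girls' purchases are X")
```

Run it and the share climbs every round without any change in what girls actually want – the algorithm is just confirming its own suggestions.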
Another ML issue that came up for discussion when I was working at Amazon was: some people consider LGBTQ topics to be inappropriate for some readers, so even if someone had actively searched for LGBTQ fiction, the recommendation engine was instructed to NOT suggest those titles. That has the effect of erasing representation for marginalized people and increasing isolation among those who are already isolated. In fact, one could argue that one of the things that ML does best is erase the margins (obviously, depending on how it’s implemented, but in the rush to solve all problems with it, these types of questions seem to be ignored a lot).
I mentioned positive feedback loops before. The analogy I have in my head is: ML-type algorithms (unless you build in randomness & jitter) amplify differences until you end up with a square wave – flat and of two polar opposite values. Negative feedback loops lead to dynamic yet stable equilibria.
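The square-wave analogy can be sketched with two toy update rules – the gains and the 0.5 midpoint here are arbitrary assumptions chosen just to make the contrast visible, not any real algorithm:

```python
# Contrast a positive-feedback update (amplifies deviation from the
# midpoint until it pins at an extreme) with a negative-feedback update
# (corrects back toward a set point). Gains are illustrative.

def positive_feedback(x, gain=0.5):
    # Push the value further from 0.5, clamped into [0, 1].
    return min(1.0, max(0.0, x + gain * (x - 0.5)))

def negative_feedback(x, setpoint=0.5, gain=0.5):
    # Pull the value back toward the set point.
    return x + gain * (setpoint - x)

x_pos = x_neg = 0.55  # start barely above the midpoint
for _ in range(10):
    x_pos = positive_feedback(x_pos)
    x_neg = negative_feedback(x_neg)

print(f"positive feedback: {x_pos:.2f}")  # pinned at an extreme (1.00)
print(f"negative feedback: {x_neg:.2f}")  # settled at the set point (0.50)
```

A tiny initial difference (0.55 vs 0.50) is all it takes: the positive-feedback rule saturates at one of the two ‘polar opposite values’, which is exactly the flattening-into-a-square-wave worry.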
I mean, it’s obviously here to stay, and it clearly has some very significant beneficial use cases. But there are a lot of ethical questions that aren’t getting a lot of attention and I’d love to see more focus on that over the technical ones. Thank you for mentioning them 🙂
The more time I spend in this industry, the more I believe that the one Computational Ethics course I took back in my CS degree wasn’t nearly enough, and that we could really use a much broader conversation in that area. [To that end, I’ve also signed up for some philosophy courses to go with the ML one ;)]