Hello. Welcome to Bridging the Verifiability Gap: Why we need more from our specs and how we can get it. My name is Jordan Halterman. I'm a member of the technical staff at the Open Networking Foundation, where I focus on distributed systems.

First, a brief overview of the talk. I'll talk a little bit about distributed systems at ONF, and then about what I'm calling the verifiability gap. Then I'll talk about a couple of the approaches that we take to overcoming this verifiability gap: model-based trace checking and model-based conformance monitoring. Finally, a little bit about what we learned along the way.

Distributed systems at ONF. The Open Networking Foundation is an industry-funded, open source, non-profit foundation. We're dedicated to bringing software-defined networking technology from research into production in the industry. ONF has a small engineering staff, which is where I come from, that works on a pretty large variety of really ambitious open source projects. The project that I work on in particular, or that I was originally hired to work on, is a project called ONOS. ONOS is an open source software-defined network controller. It was the first project created at ONF, and it was eventually brought into production a few years back at Comcast.

What is a software-defined network controller? The whole idea of software-defined networking is to take network control out of proprietary hardware and into a software layer. In a network with a number of hardware switches, ONOS sits above the switches and controls them to do things like packet forwarding, device configuration management, and things like that. ONOS actually does this with several nodes, so it's a distributed system. ONOS nodes communicate with each other, and they do so for scale, fault tolerance, and things like that. Within the ONOS cluster, we run a bunch of different types of protocols: gossip, failure detection, consensus protocols, sharding, primary-backup, and things like that.
ONOS is a pretty large and complex distributed system.

That brings me to our use of TLA+ and what I'm calling the verifiability gap. In 2018, we began field trials of ONOS to bring it into production at Comcast. During this process, we learned a lot. Production-scale testing exposed a lot of bugs that we realized had been dormant in our system for years, but we had just never seen them because we'd never tested it in quite the right way. I personally spent hours, and often days, scanning gigabyte-sized logs to try to understand traces and track down specific distributed systems bugs. But after lots and lots of this work, ONOS was eventually deployed nationwide in Comcast's network.

TLA+ was an important part of that process. During that process, we used TLA+ to design new distributed systems protocols and improve existing protocols. Some examples of the types of work we did with TLA+: we used it to extend the Raft consensus protocol, to build new distributed locking algorithms, and to design custom primary-backup protocols, and we even used it for some research on using software-defined networking techniques to build new consensus protocols. TLA+ was really critical in helping to validate solutions for a lot of bugs, but we felt it could have been more effective if we'd been able to use it in the initial design of the system.

Fortunately for us, in 2018 the ONOS team began rewriting ONOS. Since the ONOS project had begun, we'd seen the rise of the cloud and projects like Kubernetes, and so we wanted to rewrite ONOS. What this represented for me was opportunity: the opportunity to rethink how we were designing and developing distributed systems in ONOS. When we began rewriting ONOS, we did so with a focus on testing and debugging infrastructure, with the goal of answering a couple of questions. One was, how can we reduce the number of bugs in the system? The other was, how can we make debugging the bugs that do arise easier?
The solution for us was a new commitment to TLA+ as well. From the start, we began using TLA+ to design parts of the new ONOS system. We used it to document and verify the algorithms we were using, hoping that the specs we wrote would provide a foundation for experimenting with enhancements in the future. So far in rewriting ONOS, we've used TLA+ to design new leader election algorithms, verify control loop logic, design distributed caches, and more. But the problem we ran into was: now that we're using TLA+ for all this stuff, even if we know the algorithms are correct, how do we know the code is correct? This is the verifiability gap that I'm talking about.

What would our ideal solution to this gap look like? Well, when we design a new algorithm with TLA+ and verify the algorithm with TLC, we thought we should be able to then implement the algorithm in the language of our choice. After doing so, we should be able to verify the implementation using our TLA+ spec, and when bugs arise, debug the implementation using the TLA+ spec, which contains everything we should need to be able to do that. Why do that with TLA+, though? Well, our algorithms are already specified in TLA+, and so using an alternative tool would present the same problem the implementation itself does: we would have to make sure it stays consistent with the TLA+ spec. Being able to use TLA+ to verify the implementation could also help encourage engineers to use TLA+ to design new algorithms as well.

We set out to explore some ways that we could close this gap, and the first thing we looked at was model-based trace checking. The idea of model-based trace checking is that we would be able to run our application, log application traces in a structured format like JSON, consume that structured format into a TLA+ model, update the model state, and verify some invariants, so we check that the actual implementation adheres to those invariants.
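As a rough illustration, a structured trace consumed by the model might end up looking something like the following TLA+ sequence of records. This is a hypothetical sketch: the field names (node, op, key, version) are illustrative and not the actual ONOS trace format.

ExampleTrace == <<
    [node |-> "onos-1", op |-> "put", key |-> "flows/1", version |-> 1],
    [node |-> "onos-2", op |-> "get", key |-> "flows/1", version |-> 1],
    [node |-> "onos-2", op |-> "get", key |-> "flows/1", version |-> 2]
>>

In practice, each record would be parsed from a JSON log line rather than written by hand, and each record drives one step of the model.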
To do this, we actually built our own test framework. We have this test framework for testing Helm charts in Kubernetes. We have a test command that basically deploys a test coordinator inside Kubernetes. Kubernetes is a container orchestration platform, so it's a distributed system for running distributed systems; it's not incredibly important here. We deploy our test coordinator, the coordinator deploys our distributed systems like Raft and ONOS, and then we run some tests that tell those distributed systems to do things. They interact with each other, and then eventually they send logs back to the coordinator, where they get aggregated and sent back to the client. This is where we can run TLC to do trace checking with TLA+.

This is what an example spec looks like for trace checking. You can see that we imported this module called Trace. The Trace module is basically what reads traces from some structured format like JSON. In the next-state relation, we're basically reading the next structured trace and then doing something with it. In this case we're testing a distributed cache: we read a trace, record it in a history variable, and then there's an invariant checking that a client never sees history go back in time (a rough sketch of what such a spec might look like follows below).

This seemed to work really well for a client-centric consistency model like this, where we're basically checking that the client is seeing some specific behavior and, more importantly, not seeing some other behavior. But it still wasn't obvious how to ensure that the code correctly implements every single step of the spec internally. This feels like it depends on our ability to trigger bugs, and we're not really too confident in that, because as our production experience showed, we see a lot of bugs in production that we seem to be unable to trigger ourselves in testing environments. We'd really like to be able to detect bugs when they happen, rather than relying on our ability to make them occur.
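Here is that rough sketch of a trace-checking spec for the distributed cache, under a few assumptions: the parsed trace is supplied as a constant (for example by the trace-reading module described in the talk), and the spec tracks a single client's observed history for brevity. Names and fields are illustrative, not the actual ONOS spec.

---------------------------- MODULE CacheTrace ----------------------------
EXTENDS Naturals, Sequences

\* The sequence of records parsed from the JSON logs, e.g. supplied by the
\* trace-reading module or as a model constant.
CONSTANT Trace

VARIABLES i, history

Init ==
    /\ i = 1
    /\ history = <<>>

Next ==
    \* Read the next trace record and append it to the observed history.
    \/ /\ i <= Len(Trace)
       /\ history' = Append(history, Trace[i])
       /\ i' = i + 1
    \* End of trace: allow stuttering so TLC doesn't report a deadlock.
    \/ /\ i > Len(Trace)
       /\ UNCHANGED <<i, history>>

\* The client must never see the cache go back in time: each observed
\* version is at least as new as the one observed before it.
HistoryNeverGoesBack ==
    \A j \in 2..Len(history) : history[j - 1].version <= history[j].version

Spec == Init /\ [][Next]_<<i, history>>
=============================================================================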
That brings us to the second approach we took to verifying our implementations, which is model-based conformance monitoring. The idea of conformance monitoring is very similar to trace checking, except we're doing it in real time so that we can try to catch those bugs that happen in production. The idea is just that the application logs traces to a stream instead of to some file that's later processed in batch. Again, the TLC process consumes the traces from the stream, updates the model state, and checks invariants, with the added feature of being able to alert when an invariant is violated, since we're processing an infinite stream of traces.

To do this, we actually updated our test infrastructure and added a system called Kafka. Kafka is the system that ONOS now writes its traces to. Kafka is a distributed, fault-tolerant log, which in this architecture acts as a buffer for the traces coming out of ONOS and going into the model checker. The model checker runs on the other side of Kafka. The model is the input to it, and it reads traces from the stream as fast as it can while the other side writes traces as fast as it can. Then TLC does its thing, which it does well.

This is what the conformance monitoring spec looks like for the same distributed caching algorithm. Let's look at it up close. There are just a couple of differences. We're again tracking an offset; in this case the offset is basically an offset in Kafka. We've added this Trace operator. What the Trace operator does is, given an offset, if the record at that offset is available in the Kafka buffer, it will read it; otherwise it will block and wait for the next trace to become available so it can evaluate it. Again, we're checking the same invariant, basically verifying that the history will not go back in time, but in this case we've also added an Alert operator.
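A minimal sketch of the conformance-monitoring variant of the same spec might look like this. Here Trace(offset) stands for the blocking read from the Kafka buffer and Alert(msg) for the operator that writes a violation message back to Kafka; in this architecture both would presumably be supplied as custom TLC module overrides, and their names and signatures here are assumptions for illustration.

------------------------- MODULE CacheConformance -------------------------
EXTENDS Naturals, Sequences, TLC

\* Trace(offset) reads the record at the given Kafka offset, blocking until
\* it becomes available. Alert(msg) publishes msg back to Kafka for the
\* monitoring system and is assumed to return TRUE so checking can continue
\* over the infinite stream. Both are assumed to be custom module overrides.
CONSTANTS Trace(_), Alert(_)

VARIABLES offset, history

Init ==
    /\ offset = 0
    /\ history = <<>>

\* Consume the next record from the stream and extend the observed history.
Next ==
    /\ history' = Append(history, Trace(offset))
    /\ offset' = offset + 1

\* The same invariant as before: history never goes back in time.
HistoryNeverGoesBack ==
    \A j \in 2..Len(history) : history[j - 1].version <= history[j].version

\* When the invariant is violated, raise an alert rather than just failing.
CheckedHistory ==
    \/ HistoryNeverGoesBack
    \/ Alert("history went back in time at offset " \o ToString(offset))

Spec == Init /\ [][Next]_<<offset, history>>
============================================================================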
The Alert operator basically says that when this invariant is violated, we can write a message back to Kafka so it can go to some monitoring system or some other place like that.

There are some challenges with this that we are still working to overcome. One is that it's really difficult to figure out how to limit the size of the trace in an infinite stream. If we want to be able to identify the trace that led to a violation when an invariant is violated, we're basically dealing with an infinite trace, which becomes very difficult to handle. Then there are obviously ordering issues in a distributed system. Our test model was easy to work with because it didn't require an order across processes; basically, we're just establishing that the history is ordered within a single process, but it becomes a much bigger problem when you need to order across processes. One way to do that is perhaps to rely on timestamps for ordering across processes, which isn't always ideal, but maybe it's okay for model checking or for a monitoring system like this. Again, it feels like it works best for a client-centric consistency model like the one in our example. But there may be some ways to improve it so you can do ordering across processes. Things like clock synchronization protocols are getting much more accurate, and they could be really useful in this context. You'd still need a sorting step, which complicates the architecture, but it seems like it could be possible.

That finally brings me to what we learned along the way. I think what we learned is that it's generally possible to use TLA+ to check traces, and we're fond of the approach. But like I've said a couple of times, it seems to be easier to test local invariants than global invariants, which might have to do with ordering issues across multiple processes.
Unfortunately, we weren't really able to do something like trace checking using our original specs. Our dream was to write a spec and then be able to use it directly to verify the system, but the specs still required some modification. The modularity of TLA+ does allow specs to share logic, though, so you can actually organize specs in a way where this can be done cleanly, where you have both the portion of the spec that you use for the original model checking and another module that you use for trace checking (a rough sketch of this organization appears below).

But we still see significant value in trace checking with TLA+. I think we've had a lot of success using it the way we used it in the examples here, and I think we will continue to do that to verify things like API guarantees inside of ONOS. But we still haven't really figured out how to use it to verify internal implementations, so maybe other approaches, like test case generation, are better in this respect. By making it part of our infrastructure, which we are still going to do, we hope to be able to use it to detect bugs before they're seen in production. Then we hope to use it to reduce the effort required to debug systems. That experience will help us find new ways to generalize this approach and make it more usable.
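To illustrate the modular organization mentioned above, the shared logic might be factored roughly like this: a core module holds the state and invariants, the design-time spec adds a generated next-state relation for ordinary model checking, and the trace-checking spec drives the same definitions from recorded traces. This is a hypothetical sketch; module and operator names are illustrative.

------------------------------- MODULE Cache -------------------------------
\* Core module: state and invariants shared by both specs.
EXTENDS Naturals, Sequences

VARIABLE history

HistoryNeverGoesBack ==
    \A j \in 2..Len(history) : history[j - 1].version <= history[j].version
=============================================================================

---------------------------- MODULE CacheDesign ----------------------------
\* Design-time spec: its Next relation nondeterministically generates reads
\* and writes, and TLC explores all behaviors the algorithm allows.
EXTENDS Cache
=============================================================================

---------------------------- MODULE CacheReplay ----------------------------
\* Trace-checking spec: its Next relation replays recorded traces, and TLC
\* checks the same invariants against what the implementation actually did.
EXTENDS Cache
=============================================================================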