5042 stop using deprecated atomic functions
--- old/usr/src/uts/common/os/cyclic.c
+++ new/usr/src/uts/common/os/cyclic.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21 /*
22 22 * Copyright 2008 Sun Microsystems, Inc. All rights reserved.
23 23 * Use is subject to license terms.
24 24 */
25 25
26 26 /*
27 27 * Copyright (c) 2012, Joyent Inc. All rights reserved.
28 28 */
29 29
30 30 /*
31 31 * The Cyclic Subsystem
32 32 * --------------------
33 33 *
34 34 * Prehistory
35 35 *
36 36 * Historically, most computer architectures have specified interval-based
37 37 * timer parts (e.g. SPARCstation's counter/timer; Intel's i8254). While
38 38 * these parts deal in relative (i.e. not absolute) time values, they are
39 39 * typically used by the operating system to implement the abstraction of
40 40 * absolute time. As a result, these parts cannot typically be reprogrammed
41 41 * without introducing error in the system's notion of time.
42 42 *
43 43 * Starting in about 1994, chip architectures began specifying high resolution
44 44 * timestamp registers. As of this writing (1999), all major chip families
45 45 * (UltraSPARC, PentiumPro, MIPS, PowerPC, Alpha) have high resolution
46 46 * timestamp registers, and two (UltraSPARC and MIPS) have added the capacity
47 47 * to interrupt based on timestamp values. These timestamp-compare registers
48 48 * present a time-based interrupt source which can be reprogrammed arbitrarily
49 49 * often without introducing error. Given the low cost of implementing such a
50 50 * timestamp-compare register (and the tangible benefit of eliminating
51 51 * discrete timer parts), it is reasonable to expect that future chip
52 52 * architectures will adopt this feature.
53 53 *
54 54 * The cyclic subsystem has been designed to take advantage of chip
55 55 * architectures with the capacity to interrupt based on absolute, high
56 56 * resolution values of time.
57 57 *
58 58 * Subsystem Overview
59 59 *
60 60 * The cyclic subsystem is a low-level kernel subsystem designed to provide
61 61 * arbitrarily high resolution, per-CPU interval timers (to avoid colliding
62 62 * with existing terms, we dub such an interval timer a "cyclic"). Cyclics
63 63 * can be specified to fire at high, lock or low interrupt level, and may be
64 64 * optionally bound to a CPU or a CPU partition. A cyclic's CPU or CPU
65 65 * partition binding may be changed dynamically; the cyclic will be "juggled"
66 66 * to a CPU which satisfies the new binding. Alternatively, a cyclic may
67 67 * be specified to be "omnipresent", denoting firing on all online CPUs.
68 68 *
69 69 * Cyclic Subsystem Interface Overview
70 70 * -----------------------------------
71 71 *
72 72 * The cyclic subsystem has interfaces with the kernel at-large, with other
73 73 * kernel subsystems (e.g. the processor management subsystem, the checkpoint
74 74 * resume subsystem) and with the platform (the cyclic backend). Each
75 75 * of these interfaces is given a brief synopsis here, and is described
76 76 * in full above the interface's implementation.
77 77 *
78 78 * The following diagram displays the cyclic subsystem's interfaces to
79 79 * other kernel components. The arrows denote a "calls" relationship, with
80 80 * the large arrow indicating the cyclic subsystem's consumer interface.
81 81 * Each arrow is labeled with the section in which the corresponding
82 82 * interface is described.
83 83 *
84 84 * Kernel at-large consumers
85 85 * -----------++------------
86 86 * ||
87 87 * ||
88 88 * _||_
89 89 * \ /
90 90 * \/
91 91 * +---------------------+
92 92 * | |
93 93 * | Cyclic subsystem |<----------- Other kernel subsystems
94 94 * | |
95 95 * +---------------------+
96 96 * ^ |
97 97 * | |
98 98 * | |
99 99 * | v
100 100 * +---------------------+
101 101 * | |
102 102 * | Cyclic backend |
103 103 * | (platform specific) |
104 104 * | |
105 105 * +---------------------+
106 106 *
107 107 *
108 108 * Kernel At-Large Interfaces
109 109 *
110 110 * cyclic_add() <-- Creates a cyclic
111 111 * cyclic_add_omni() <-- Creates an omnipresent cyclic
112 112 * cyclic_remove() <-- Removes a cyclic
113 113 * cyclic_bind() <-- Change a cyclic's CPU or partition binding
114 114 * cyclic_reprogram() <-- Reprogram a cyclic's expiration
115 115 *
116 116 * Inter-subsystem Interfaces
117 117 *
118 118 * cyclic_juggle() <-- Juggles cyclics away from a CPU
119 119 * cyclic_offline() <-- Offlines cyclic operation on a CPU
120 120 * cyclic_online() <-- Reenables operation on an offlined CPU
121 121 * cyclic_move_in() <-- Notifies subsystem of change in CPU partition
122 122 * cyclic_move_out() <-- Notifies subsystem of change in CPU partition
123 123 * cyclic_suspend() <-- Suspends the cyclic subsystem on all CPUs
124 124 * cyclic_resume() <-- Resumes the cyclic subsystem on all CPUs
125 125 *
126 126 * Backend Interfaces
127 127 *
128 128 * cyclic_init() <-- Initializes the cyclic subsystem
129 129 * cyclic_fire() <-- CY_HIGH_LEVEL interrupt entry point
130 130 * cyclic_softint() <-- CY_LOCK/LOW_LEVEL soft interrupt entry point
131 131 *
132 132 * The backend-supplied interfaces (through the cyc_backend structure) are
133 133 * documented in detail in <sys/cyclic_impl.h>
134 134 *
135 135 *
136 136 * Cyclic Subsystem Implementation Overview
137 137 * ----------------------------------------
138 138 *
139 139 * The cyclic subsystem is designed to minimize interference between cyclics
140 140 * on different CPUs. Thus, all of the cyclic subsystem's data structures
141 141 * hang off of a per-CPU structure, cyc_cpu.
142 142 *
143 143 * Each cyc_cpu has a power-of-two sized array of cyclic structures (the
144 144 * cyp_cyclics member of the cyc_cpu structure). If cyclic_add() is called
145 145 * and there does not exist a free slot in the cyp_cyclics array, the size of
146 146 * the array will be doubled. The array will never shrink. Cyclics are
147 147 * referred to by their index in the cyp_cyclics array, which is of type
148 148 * cyc_index_t.
149 149 *
150 150 * The cyclics are kept sorted by expiration time in the cyc_cpu's heap. The
151 151 * heap is keyed by cyclic expiration time, with parents expiring earlier
152 152 * than their children.
153 153 *
154 154 * Heap Management
155 155 *
156 156 * The heap is managed primarily by cyclic_fire(). Upon entry, cyclic_fire()
157 157 * compares the root cyclic's expiration time to the current time. If the
158 158 * expiration time is in the past, cyclic_expire() is called on the root
159 159 * cyclic. Upon return from cyclic_expire(), the cyclic's new expiration time
160 160 * is derived by adding its interval to its old expiration time, and a
161 161 * downheap operation is performed. After the downheap, cyclic_fire()
162 162 * examines the (potentially changed) root cyclic, repeating the
163 163 * cyclic_expire()/add interval/cyclic_downheap() sequence until the root
164 164 * cyclic has an expiration time in the future. This expiration time
165 165 * (guaranteed to be the earliest in the heap) is then communicated to the
166 166 * backend via cyb_reprogram. Optimal backends will next call cyclic_fire()
167 167 * shortly after the root cyclic's expiration time.
168 168 *
169 169 * To allow efficient, deterministic downheap operations, we implement the
170 170 * heap as an array (the cyp_heap member of the cyc_cpu structure), with each
171 171 * element containing an index into the CPU's cyp_cyclics array.
172 172 *
173 173 * The heap is laid out in the array according to the following:
174 174 *
175 175 * 1. The root of the heap is always in the 0th element of the heap array
176 176 * 2. The left and right children of the nth element are element
177 177 * (((n + 1) << 1) - 1) and element ((n + 1) << 1), respectively.
178 178 *
179 179 * This layout is standard (see, e.g., Cormen et al.'s "Introduction to
180 180 * that these constraints correctly lay out a heap (or indeed, any binary
181 181 * tree) is trivial and left to the reader.
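
As a concrete rendering of rules (1) and (2), the index arithmetic reduces to shifts. The CYC_HEAP_PARENT/CYC_HEAP_LEFT/CYC_HEAP_RIGHT macros used by cyclic_upheap() and cyclic_downheap() below are defined in <sys/cyclic_impl.h>; a minimal sketch consistent with the layout rules:

    /*
     * Sketch of the heap index arithmetic implied by the layout rules
     * above; the authoritative definitions live in <sys/cyclic_impl.h>.
     */
    #define CYC_HEAP_PARENT(ndx)    ((((ndx) + 1) >> 1) - 1)
    #define CYC_HEAP_LEFT(ndx)      ((((ndx) + 1) << 1) - 1)
    #define CYC_HEAP_RIGHT(ndx)     (((ndx) + 1) << 1)
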
182 182 *
183 183 * To see the heap by example, assume our cyclics array has the following
184 184 * members (at time t):
185 185 *
186 186 * cy_handler cy_level cy_expire
187 187 * ---------------------------------------------
188 188 * [ 0] clock() LOCK t+10000000
189 189 * [ 1] deadman() HIGH t+1000000000
190 190 * [ 2] clock_highres_fire() LOW t+100
191 191 * [ 3] clock_highres_fire() LOW t+1000
192 192 * [ 4] clock_highres_fire() LOW t+500
193 193 * [ 5] (free) -- --
194 194 * [ 6] (free) -- --
195 195 * [ 7] (free) -- --
196 196 *
197 197 * The heap array could be:
198 198 *
199 199 * [0] [1] [2] [3] [4] [5] [6] [7]
200 200 * +-----+-----+-----+-----+-----+-----+-----+-----+
201 201 * | | | | | | | | |
202 202 * | 2 | 3 | 4 | 0 | 1 | x | x | x |
203 203 * | | | | | | | | |
204 204 * +-----+-----+-----+-----+-----+-----+-----+-----+
205 205 *
206 206 * Graphically, this array corresponds to the following (excuse the ASCII art):
207 207 *
208 208 * 2
209 209 * |
210 210 * +------------------+------------------+
211 211 * 3 4
212 212 * |
213 213 * +---------+--------+
214 214 * 0 1
215 215 *
216 216 * Note that the heap is laid out by layer: all nodes at a given depth are
217 217 * stored in consecutive elements of the array. Moreover, layers of
218 218 * consecutive depths are in adjacent element ranges. This property
219 219 * guarantees high locality of reference during downheap operations.
220 220 * Specifically, we are guaranteed that we can downheap to a depth of
221 221 *
222 222 * lg (cache_line_size / sizeof (cyc_index_t))
223 223 *
224 224 * nodes with at most one cache miss. On UltraSPARC (64 byte e-cache line
225 225 * size), this corresponds to a depth of four nodes. Thus, if there are
226 226 * fewer than sixteen cyclics in the heap, downheaps on UltraSPARC miss at
227 227 * most once in the e-cache.
228 228 *
229 229 * Downheaps are required to compare siblings as they proceed down the
230 230 * heap. For downheaps proceeding beyond the one-cache-miss depth, every
231 231 * access to a left child could potentially miss in the cache. However,
232 232 * if we assume
233 233 *
234 234 * (cache_line_size / sizeof (cyc_index_t)) > 2,
235 235 *
236 236 * then all siblings are guaranteed to be on the same cache line. Thus, the
237 237 * miss on the left child will guarantee a hit on the right child; downheaps
238 238 * will incur at most one cache miss per layer beyond the one-cache-miss
239 239 * depth. The total number of cache misses for heap management during a
240 240 * downheap operation is thus bounded by
241 241 *
242 242 * lg (n) - lg (cache_line_size / sizeof (cyc_index_t))
243 243 *
244 244 * Traditional pointer-based heaps are implemented without regard to
245 245 * locality. Downheaps can thus incur two cache misses per layer (one for
246 246 * each child), but at most one cache miss at the root. This yields a bound
247 247 * of
248 248 *
249 249 * 2 * lg (n) - 1
250 250 *
251 251 * on the total cache misses.
252 252 *
253 253 * This difference may seem theoretically trivial (the difference is, after
254 254 * all, constant), but can become substantial in practice -- especially for
255 255 * caches with very large cache lines and high miss penalties (e.g. TLBs).
256 256 *
257 257 * Heaps must always be full, balanced trees. Heap management must therefore
258 258 * track the next point-of-insertion into the heap. In pointer-based heaps,
259 259 * recomputing this point takes O(lg (n)). Given the layout of the
260 260 * array-based implementation, however, the next point-of-insertion is
261 261 * always:
262 262 *
263 263 * heap[number_of_elements]
264 264 *
265 265 * We exploit this property by implementing the free-list in the unused
266 266 * heap elements. Heap insertion, therefore, consists only of filling in
267 267 * the cyclic at cyp_cyclics[cyp_heap[number_of_elements]], incrementing
268 268 * the number of elements, and performing an upheap. Heap deletion consists
269 269 * of decrementing the number of elements, swapping the to-be-deleted element
270 270 * with the element at cyp_heap[number_of_elements], and downheaping.
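
In code, the two operations come down to the following (a sketch under the invariants above; these helpers are hypothetical and omit the level and locking bookkeeping that the real cyclic_add()/cyclic_remove() paths perform):

    /*
     * Sketch: insert. The free list lives in the unused tail of
     * cyp_heap, so heap[nelems] already names a free cyclics slot.
     */
    static cyc_index_t
    heap_insert_sketch(cyc_cpu_t *cpu, cyclic_t *what)
    {
        cyc_index_t nelems = cpu->cyp_nelems;
        cyc_index_t ndx = cpu->cyp_heap[nelems];

        cpu->cyp_cyclics[ndx] = *what;          /* fill in the cyclic */
        cpu->cyp_nelems = nelems + 1;
        (void) cyclic_upheap(cpu, nelems);      /* restore heap order */
        return (ndx);
    }

    /*
     * Sketch: delete the entry at heap position hndx by swapping it
     * with the last element; the freed slot rejoins the free list.
     */
    static void
    heap_delete_sketch(cyc_cpu_t *cpu, cyc_index_t hndx)
    {
        cyc_index_t *heap = cpu->cyp_heap;
        cyc_index_t last = --cpu->cyp_nelems;
        cyc_index_t tmp = heap[hndx];

        heap[hndx] = heap[last];
        heap[last] = tmp;
        cyclic_downheap(cpu, hndx);
    }
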
271 271 *
272 272 * Filling in more details in our earlier example:
273 273 *
274 274 * +--- free list head
275 275 * |
276 276 * V
277 277 *
278 278 * [0] [1] [2] [3] [4] [5] [6] [7]
279 279 * +-----+-----+-----+-----+-----+-----+-----+-----+
280 280 * | | | | | | | | |
281 281 * | 2 | 3 | 4 | 0 | 1 | 5 | 6 | 7 |
282 282 * | | | | | | | | |
283 283 * +-----+-----+-----+-----+-----+-----+-----+-----+
284 284 *
285 285 * To insert into this heap, we would just need to fill in the cyclic at
286 286 * cyp_cyclics[5], bump the number of elements (from 5 to 6) and perform
287 287 * an upheap.
288 288 *
289 289 * If we wanted to remove, say, cyp_cyclics[3], we would first scan for it
290 290 * in the cyp_heap, and discover it at cyp_heap[1]. We would then decrement
291 291 * the number of elements (from 5 to 4), swap cyp_heap[1] with cyp_heap[4],
292 292 * and perform a downheap from cyp_heap[1]. The linear scan is required
293 293 * because the cyclic does not keep a backpointer into the heap. This makes
294 294 * heap manipulation (e.g. downheaps) faster at the expense of removal
295 295 * operations.
296 296 *
297 297 * Expiry processing
298 298 *
299 299 * As alluded to above, cyclic_expire() is called by cyclic_fire() at
300 300 * CY_HIGH_LEVEL to expire a cyclic. Cyclic subsystem consumers are
301 301 * guaranteed that for an arbitrary time t in the future, their cyclic
302 302 * handler will have been called (t - cyt_when) / cyt_interval times. Thus,
303 303 * there must be a one-to-one mapping between a cyclic's expiration at
304 304 * CY_HIGH_LEVEL and its execution at the desired level (either CY_HIGH_LEVEL,
305 305 * CY_LOCK_LEVEL or CY_LOW_LEVEL).
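
(For example, a cyclic created with cyt_when = 0 and a 10 millisecond cyt_interval must have had its handler called 100 times by t = 1 second, no matter how its expirations and executions interleaved in the interim.)
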
306 306 *
307 307 * For CY_HIGH_LEVEL cyclics, this is trivial; cyclic_expire() simply needs
308 308 * to call the handler.
309 309 *
310 310 * For CY_LOCK_LEVEL and CY_LOW_LEVEL cyclics, however, there exists a
311 311 * potential disconnect: if the CPU is at an interrupt level less than
312 312 * CY_HIGH_LEVEL but greater than the level of a cyclic for a period of
313 313 * time longer than twice the cyclic's interval, the cyclic will be expired
314 314 * twice before it can be handled.
315 315 *
316 316 * To maintain the one-to-one mapping, we track the difference between the
317 317 * number of times a cyclic has been expired and the number of times it's
318 318 * been handled in a "pending count" (the cy_pend field of the cyclic
319 319 * structure). cyclic_expire() thus increments the cy_pend count for the
320 320 * expired cyclic and posts a soft interrupt at the desired level. In the
321 321 * cyclic subsystem's soft interrupt handler, cyclic_softint(), we repeatedly
322 322 * call the cyclic handler and decrement cy_pend until we have decremented
323 323 * cy_pend to zero.
324 324 *
325 325 * The Producer/Consumer Buffer
326 326 *
327 327 * If we wish to avoid a linear scan of the cyclics array at soft interrupt
328 328 * level, cyclic_softint() must be able to quickly determine which cyclics
329 329 * have a non-zero cy_pend count. We thus introduce a per-soft interrupt
330 330 * level producer/consumer buffer shared with CY_HIGH_LEVEL. These buffers
331 331 * are encapsulated in the cyc_pcbuffer structure, and, like cyp_heap, are
332 332 * implemented as cyc_index_t arrays (the cypc_buf member of the cyc_pcbuffer
333 333 * structure).
334 334 *
335 335 * The producer (cyclic_expire() running at CY_HIGH_LEVEL) enqueues a cyclic
336 336 * by storing the cyclic's index to cypc_buf[cypc_prodndx] and incrementing
337 337 * cypc_prodndx. The consumer (cyclic_softint() running at either
338 338 * CY_LOCK_LEVEL or CY_LOW_LEVEL) dequeues a cyclic by loading from
339 339 * cypc_buf[cypc_consndx] and bumping cypc_consndx. The buffer is empty when
340 340 * cypc_prodndx == cypc_consndx.
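
Because the indices only ever increase and are masked into the buffer on use, both sides reduce to a few loads and stores (a sketch; the masking matches the real cyclic_expire()/cyclic_softint() code below):

    /* Producer, at CY_HIGH_LEVEL (cf. cyclic_expire()): */
    pc->cypc_buf[pc->cypc_prodndx++ & pc->cypc_sizemask] = ndx;

    /* Consumer, at soft level (cf. cyclic_softint()): */
    while (pc->cypc_consndx != pc->cypc_prodndx) {
        cyc_index_t ndx =
            pc->cypc_buf[pc->cypc_consndx & pc->cypc_sizemask];

        /* ... drive the cyclic's cy_pend count to zero ... */
        pc->cypc_consndx++;
    }
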
341 341 *
342 342 * To bound the size of the producer/consumer buffer, cyclic_expire() only
343 343 * enqueues a cyclic if its cy_pend was zero (if the cyclic's cy_pend is
344 344 * non-zero, cyclic_expire() only bumps cy_pend). Symmetrically,
345 345 * cyclic_softint() only consumes a cyclic after it has decremented the
346 346 * cy_pend count to zero.
347 347 *
348 348 * Returning to our example, here is what the CY_LOW_LEVEL producer/consumer
349 349 * buffer might look like:
350 350 *
351 351 * cypc_consndx ---+ +--- cypc_prodndx
352 352 * | |
353 353 * V V
354 354 *
355 355 * [0] [1] [2] [3] [4] [5] [6] [7]
356 356 * +-----+-----+-----+-----+-----+-----+-----+-----+
357 357 * | | | | | | | | |
358 358 * | x | x | 3 | 2 | 4 | x | x | x | <== cypc_buf
359 359 * | | | . | . | . | | | |
360 360 * +-----+-----+- | -+- | -+- | -+-----+-----+-----+
361 361 * | | |
362 362 * | | | cy_pend cy_handler
363 363 * | | | -------------------------
364 364 * | | | [ 0] 1 clock()
365 365 * | | | [ 1] 0 deadman()
366 366 * | +---- | -------> [ 2] 3 clock_highres_fire()
367 367 * +---------- | -------> [ 3] 1 clock_highres_fire()
368 368 * +--------> [ 4] 1 clock_highres_fire()
369 369 * [ 5] - (free)
370 370 * [ 6] - (free)
371 371 * [ 7] - (free)
372 372 *
373 373 * In particular, note that clock()'s cy_pend is 1 but that it is _not_ in
374 374 * this producer/consumer buffer; it would be enqueued in the CY_LOCK_LEVEL
375 375 * producer/consumer buffer.
376 376 *
377 377 * Locking
378 378 *
379 379 * Traditionally, access to per-CPU data structures shared between
380 380 * interrupt levels is serialized by manipulating programmable interrupt
381 381 * level: readers and writers are required to raise their interrupt level
382 382 * to that of the highest level writer.
383 383 *
384 384 * For the producer/consumer buffers (shared between cyclic_fire()/
385 385 * cyclic_expire() executing at CY_HIGH_LEVEL and cyclic_softint() executing
386 386 * at one of CY_LOCK_LEVEL or CY_LOW_LEVEL), forcing cyclic_softint() to raise
387 387 * programmable interrupt level is undesirable: aside from the additional
388 388 * latency incurred by manipulating interrupt level in the hot cy_pend
389 389 * processing path, this would create the potential for soft level cy_pend
390 390 * processing to delay CY_HIGH_LEVEL firing and expiry processing.
391 391 * CY_LOCK/LOW_LEVEL cyclics could thereby induce jitter in CY_HIGH_LEVEL
392 392 * cyclics.
393 393 *
394 394 * To minimize jitter, then, we would like the cyclic_fire()/cyclic_expire()
395 395 * and cyclic_softint() code paths to be lock-free.
396 396 *
397 397 * For cyclic_fire()/cyclic_expire(), lock-free execution is straightforward:
398 398 * because these routines execute at a higher interrupt level than
399 399 * cyclic_softint(), their actions on the producer/consumer buffer appear
400 400 * atomic. In particular, the increment of cy_pend appears to occur
401 401 * atomically with the increment of cypc_prodndx.
402 402 *
403 403 * For cyclic_softint(), however, lock-free execution requires more delicacy.
404 404 * When cyclic_softint() discovers a cyclic in the producer/consumer buffer,
405 405 * it calls the cyclic's handler and attempts to atomically decrement the
406 406 * cy_pend count with a compare&swap operation.
407 407 *
408 408 * If the compare&swap operation succeeds, cyclic_softint() behaves
409 409 * conditionally based on the value it atomically wrote to cy_pend:
410 410 *
411 411 * - If the cy_pend was decremented to 0, the cyclic has been consumed;
412 412 * cyclic_softint() increments the cypc_consndx and checks for more
413 413 * enqueued work.
414 414 *
415 415 * - If the count was decremented to a non-zero value, there is more work
416 416 * to be done on the cyclic; cyclic_softint() calls the cyclic handler
417 417 * and repeats the atomic decrement process.
418 418 *
419 419 * If the compare&swap operation fails, cyclic_softint() knows that
420 420 * cyclic_expire() has intervened and bumped the cy_pend count (resizes
421 421 * and removals complicate this, however -- see the sections on their
422 422 * operation, below). cyclic_softint() thus reloads cy_pend, and re-attempts
423 423 * the atomic decrement.
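
In code, the consumer side is a classic compare-and-swap retry loop; a simplified sketch of what cyclic_softint() does (using atomic_cas_32(), the non-deprecated interface this change adopts; resize/removal recovery elided):

    uint32_t pend;

    do {
        (*handler)(arg);                /* call handler before decrementing */

        do {
            pend = cyclic->cy_pend;
            if (pend == 0)
                break;                  /* resize or removal intervened */
        } while (atomic_cas_32(&cyclic->cy_pend,
            pend, pend - 1) != pend);
    } while (pend > 1);                 /* more expirations to consume */
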
424 424 *
425 425 * Recall that we bound the size of the producer/consumer buffer by
426 426 * having cyclic_expire() only enqueue the specified cyclic if its
427 427 * cy_pend count is zero; this assures that each cyclic is enqueued at
428 428 * most once. This leads to a critical constraint on cyclic_softint(),
429 429 * however: after the compare&swap operation which successfully decrements
430 430 * cy_pend to zero, cyclic_softint() must _not_ re-examine the consumed
431 431 * cyclic. In part to obey this constraint, cyclic_softint() calls the
432 432 * cyclic handler before decrementing cy_pend.
433 433 *
434 434 * Resizing
435 435 *
436 436 * All of the discussion thus far has assumed a static number of cyclics.
437 437 * Obviously, static limitations are not practical; we need the capacity
438 438 * to resize our data structures dynamically.
439 439 *
440 440 * We resize our data structures lazily, and only on a per-CPU basis.
441 441 * The size of the data structures always doubles and never shrinks. We
442 442 * serialize adds (and thus resizes) on cpu_lock; we never need to deal
443 443 * with concurrent resizes. Resizes should be rare; they may induce jitter
444 444 * on the CPU being resized, but should not affect cyclic operation on other
445 445 * CPUs. Pending cyclics may not be dropped during a resize operation.
446 446 *
447 447 * Three key cyc_cpu data structures need to be resized: the cyclics array,
448 448 * the heap array and the producer/consumer buffers. Resizing the first two
449 449 * is relatively straightforward:
450 450 *
451 451 * 1. The new, larger arrays are allocated in cyclic_expand() (called
452 452 * from cyclic_add()).
453 453 * 2. cyclic_expand() cross calls cyclic_expand_xcall() on the CPU
454 454 * undergoing the resize.
455 455 * 3. cyclic_expand_xcall() raises interrupt level to CY_HIGH_LEVEL
456 456 * 4. The contents of the old arrays are copied into the new arrays.
457 457 * 5. The old cyclics array is bzero()'d
458 458 * 6. The pointers are updated.
459 459 *
460 460 * The producer/consumer buffer is dicier: cyclic_expand_xcall() may have
461 461 * interrupted cyclic_softint() in the middle of consumption. To resize the
462 462 * producer/consumer buffer, we implement up to two buffers per soft interrupt
463 463 * level: a hard buffer (the buffer being produced into by cyclic_expire())
464 464 * and a soft buffer (the buffer from which cyclic_softint() is consuming).
465 465 * During normal operation, the hard buffer and soft buffer point to the
466 466 * same underlying producer/consumer buffer.
467 467 *
468 468 * During a resize, however, cyclic_expand_xcall() changes the hard buffer
469 469 * to point to the new, larger producer/consumer buffer; all future
470 470 * cyclic_expire()'s will produce into the new buffer. cyclic_expand_xcall()
471 471 * then posts a CY_LOCK_LEVEL soft interrupt, landing in cyclic_softint().
472 472 *
473 473 * As under normal operation, cyclic_softint() will consume cyclics from
474 474 * its soft buffer. After the soft buffer is drained, however,
475 475 * cyclic_softint() will see that the hard buffer has changed. At that time,
476 476 * cyclic_softint() will change its soft buffer to point to the hard buffer,
477 477 * and repeat the producer/consumer buffer draining procedure.
478 478 *
479 479 * After the new buffer is drained, cyclic_softint() will determine if both
480 480 * soft levels have seen their new producer/consumer buffer. If both have,
481 481 * cyclic_softint() will post on the semaphore cyp_modify_wait. If not, a
482 482 * soft interrupt will be generated for the remaining level.
483 483 *
484 484 * cyclic_expand() blocks on the cyp_modify_wait semaphore (a semaphore is
485 485 * used instead of a condition variable because of the race between the
486 486 * sema_p() in cyclic_expand() and the sema_v() in cyclic_softint()). This
487 487 * allows cyclic_expand() to know when the resize operation is complete;
488 488 * all of the old buffers (the heap, the cyclics array and the producer/
489 489 * consumer buffers) can be freed.
490 490 *
491 491 * A final caveat on resizing: we described step (5) in the
492 492 * cyclic_expand_xcall() procedure without providing any motivation. This
493 493 * step addresses the problem of a cyclic_softint() attempting to decrement
494 494 * a cy_pend count while interrupted by a cyclic_expand_xcall(). Because
495 495 * cyclic_softint() has already called the handler by the time cy_pend is
496 496 * decremented, we want to assure that it doesn't decrement a cy_pend
497 497 * count in the old cyclics array. By zeroing the old cyclics array in
498 498 * cyclic_expand_xcall(), we are zeroing out every cy_pend count; when
499 499 * cyclic_softint() attempts to compare&swap on the cy_pend count, it will
500 500 * fail and recognize that the count has been zeroed. cyclic_softint() will
501 501 * update its stale copy of the cyp_cyclics pointer, re-read the cy_pend
502 502 * count from the new cyclics array, and re-attempt the compare&swap.
503 503 *
504 504 * Removals
505 505 *
506 506 * Cyclic removals should be rare. To simplify the implementation (and to
507 507 * allow optimization for the cyclic_fire()/cyclic_expire()/cyclic_softint()
508 508 * path), we force removals and adds to serialize on cpu_lock.
509 509 *
510 510 * Cyclic removal is complicated by a guarantee made to the consumer of
511 511 * the cyclic subsystem: after cyclic_remove() returns, the cyclic handler
512 512 * has returned and will never again be called.
513 513 *
514 514 * Here is the procedure for cyclic removal:
515 515 *
516 516 * 1. cyclic_remove() calls cyclic_remove_xcall() on the CPU undergoing
517 517 * the removal.
518 518 * 2. cyclic_remove_xcall() raises interrupt level to CY_HIGH_LEVEL
519 519 * 3. The current expiration time for the removed cyclic is recorded.
520 520 * 4. If the cy_pend count on the removed cyclic is non-zero, it
521 521 * is copied into cyp_rpend and subsequently zeroed.
522 522 * 5. The cyclic is removed from the heap
523 523 * 6. If the root of the heap has changed, the backend is reprogrammed.
524 524 * 7. If the cy_pend count was non-zero cyclic_remove() blocks on the
525 525 * cyp_modify_wait semaphore.
526 526 *
527 527 * The motivation for step (3) is explained in "Juggling", below.
528 528 *
529 529 * The cy_pend count is decremented in cyclic_softint() after the cyclic
530 530 * handler returns. Thus, if we find a cy_pend count of zero in step
531 531 * (4), we know that cyclic_remove() doesn't need to block.
532 532 *
533 533 * If the cy_pend count is non-zero, however, we must block in cyclic_remove()
534 534 * until cyclic_softint() has finished calling the cyclic handler. To let
535 535 * cyclic_softint() know that this cyclic has been removed, we zero the
536 536 * cy_pend count. This will cause cyclic_softint()'s compare&swap to fail.
537 537 * When cyclic_softint() sees the zero cy_pend count, it knows that it's been
538 538 * caught during a resize (see "Resizing", above) or that the cyclic has been
539 539 * removed. In the latter case, it calls cyclic_remove_pend() to call the
540 540 * cyclic handler cyp_rpend - 1 times, and posts on cyp_modify_wait.
541 541 *
542 542 * Juggling
543 543 *
544 544 * At first glance, cyclic juggling seems to be a difficult problem. The
545 545 * subsystem must guarantee that a cyclic doesn't execute simultaneously on
546 546 * different CPUs, while also assuring that a cyclic fires exactly once
547 547 * per interval. We solve this problem by leveraging a property of the
548 548 * platform: gethrtime() is required to increase in lock-step across
549 549 * multiple CPUs. Therefore, to juggle a cyclic, we remove it from its
550 550 * CPU, recording its expiration time in the remove cross call (step (3)
551 551 * in "Removals", above). We then add the cyclic to the new CPU, explicitly
552 552 * setting its expiration time to the time recorded in the removal. This
553 553 * leverages the existing cyclic expiry processing, which will compensate
554 554 * for any time lost while juggling.
555 555 *
556 556 * Reprogramming
557 557 *
558 558 * Normally, after a cyclic fires, its next expiration is computed from
559 559 * the current time and the cyclic interval. But there are situations when
560 560 * the next expiration needs to be reprogrammed by the kernel subsystem that
561 561 * is using the cyclic. cyclic_reprogram() allows this to be done. This,
562 562 * unlike the other kernel at-large cyclic API functions, is permitted to
563 563 * be called from the cyclic handler. This is because it does not use the
564 564 * cpu_lock to serialize access.
565 565 *
566 566 * When cyclic_reprogram() is called for an omni-cyclic, the operation is
567 567 * applied to the omni-cyclic's component on the current CPU.
568 568 *
569 569 * If a high-level cyclic handler reprograms its own cyclic, then
570 570 * cyclic_fire() detects that and does not recompute the cyclic's next
571 571 * expiration. However, for a lock-level or a low-level cyclic, the
572 572 * actual cyclic handler will execute at the lower PIL only after
573 573 * cyclic_fire() is done with all expired cyclics. To deal with this, such
574 574 * cyclics can be specified with a special interval of CY_INFINITY (INT64_MAX).
575 575 * cyclic_fire() recognizes this special value and recomputes the next
576 576 * expiration to CY_INFINITY. This effectively moves the cyclic to the
577 577 * bottom of the heap and prevents it from going off until its handler has
578 578 * had a chance to reprogram it. In fact, this is the way to create and reuse
579 579 * "one-shot" timers in the context of the cyclic subsystem without using
580 580 * cyclic_remove().
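
A sketch of that one-shot pattern (the handler name and the arming values here are illustrative assumptions; cyclic_add() requires cpu_lock, and cyclic_reprogram() may be called from the handler itself):

    cyc_handler_t hdlr;
    cyc_time_t when;
    cyclic_id_t id;

    hdlr.cyh_func = my_oneshot_handler; /* hypothetical handler */
    hdlr.cyh_arg = NULL;
    hdlr.cyh_level = CY_LOW_LEVEL;

    when.cyt_when = CY_INFINITY;        /* parked until reprogrammed */
    when.cyt_interval = CY_INFINITY;    /* never self-rearms */

    mutex_enter(&cpu_lock);
    id = cyclic_add(&hdlr, &when);
    mutex_exit(&cpu_lock);

    /* ... later: arm it to go off once, 5 milliseconds from now ... */
    cyclic_reprogram(id, gethrtime() + 5000000);
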
581 581 *
582 582 * Here is the procedure for cyclic reprogramming:
583 583 *
584 584 * 1. cyclic_reprogram() calls cyclic_reprogram_xcall() on the CPU
585 585 * that houses the cyclic.
586 586 * 2. cyclic_reprogram_xcall() raises interrupt level to CY_HIGH_LEVEL
587 587 * 3. The cyclic is located in the cyclic heap. The search for this is
588 588 * done from the bottom of the heap to the top as reprogrammable cyclics
589 589 * would be located closer to the bottom than the top.
590 590 * 4. The cyclic expiration is set and the cyclic is moved to its
591 591 * correct position in the heap (up or down depending on whether the
592 592 * new expiration is less than or greater than the old one).
593 593 * 5. If the cyclic move modified the root of the heap, the backend is
594 594 * reprogrammed.
595 595 *
596 596 * Reprogramming can be a frequent event (see the callout subsystem). So,
597 597 * the serialization used has to be efficient. As with all other cyclic
598 598 * operations, the interrupt level is raised during reprogramming. Plus,
599 599 * during reprogramming, the cyclic must not be juggled (regular cyclic)
600 600 * or stopped (omni-cyclic). The implementation defines a per-cyclic
601 601 * reader-writer lock to accomplish this. This lock is acquired in the
602 602 * reader mode by cyclic_reprogram() and writer mode by cyclic_juggle() and
603 603 * cyclic_omni_stop(). The reader-writer lock makes it efficient if
604 604 * an omni-cyclic is reprogrammed on different CPUs frequently.
605 605 *
606 606 * Note that since the cpu_lock is not used during reprogramming, it is
607 607 * the responsibility of the user of the reprogrammable cyclic to make sure
608 608 * that the cyclic is not removed via cyclic_remove() during reprogramming.
609 609 * This is not an unreasonable requirement as the user will typically have
610 610 * some sort of synchronization for its cyclic-related activities. This
611 611 * little caveat exists because the cyclic ID is not really an ID. It is
612 612 * implemented as a pointer to a structure.
613 613 */
614 614 #include <sys/cyclic_impl.h>
615 615 #include <sys/sysmacros.h>
616 616 #include <sys/systm.h>
617 617 #include <sys/atomic.h>
618 618 #include <sys/kmem.h>
619 619 #include <sys/cmn_err.h>
620 620 #include <sys/ddi.h>
621 621 #include <sys/sdt.h>
622 622
623 623 #ifdef CYCLIC_TRACE
624 624
625 625 /*
626 626 * cyc_trace_enabled is for the benefit of kernel debuggers.
627 627 */
628 628 int cyc_trace_enabled = 1;
629 629 static cyc_tracebuf_t cyc_ptrace;
630 630 static cyc_coverage_t cyc_coverage[CY_NCOVERAGE];
631 631
632 632 /*
633 633 * Seen this anywhere?
634 634 */
635 635 static uint_t
636 636 cyclic_coverage_hash(char *p)
637 637 {
638 638 unsigned int g;
639 639 uint_t hval;
640 640
641 641 hval = 0;
642 642 while (*p) {
643 643 hval = (hval << 4) + *p++;
644 644 if ((g = (hval & 0xf0000000)) != 0)
645 645 hval ^= g >> 24;
646 646 hval &= ~g;
647 647 }
648 648 return (hval);
649 649 }
650 650
651 651 static void
652 652 cyclic_coverage(char *why, int level, uint64_t arg0, uint64_t arg1)
653 653 {
654 654 uint_t ndx, orig;
655 655
656 656 for (ndx = orig = cyclic_coverage_hash(why) % CY_NCOVERAGE; ; ) {
657 657 if (cyc_coverage[ndx].cyv_why == why)
658 658 break;
659 659
660 660 if (cyc_coverage[ndx].cyv_why != NULL ||
661 - casptr(&cyc_coverage[ndx].cyv_why, NULL, why) != NULL) {
661 + atomic_cas_ptr(&cyc_coverage[ndx].cyv_why, NULL, why) !=
662 + NULL) {
662 663
663 664 if (++ndx == CY_NCOVERAGE)
664 665 ndx = 0;
665 666
666 667 if (ndx == orig)
667 668 panic("too many cyclic coverage points");
668 669 continue;
669 670 }
670 671
671 672 /*
672 673 * If we're here, we have successfully swung our guy into
673 674 * the position at "ndx".
674 675 */
675 676 break;
676 677 }
677 678
678 679 if (level == CY_PASSIVE_LEVEL)
679 680 cyc_coverage[ndx].cyv_passive_count++;
680 681 else
681 682 cyc_coverage[ndx].cyv_count[level]++;
682 683
683 684 cyc_coverage[ndx].cyv_arg0 = arg0;
684 685 cyc_coverage[ndx].cyv_arg1 = arg1;
685 686 }
686 687
687 688 #define CYC_TRACE(cpu, level, why, arg0, arg1) \
688 689 CYC_TRACE_IMPL(&cpu->cyp_trace[level], level, why, arg0, arg1)
689 690
690 691 #define CYC_PTRACE(why, arg0, arg1) \
691 692 CYC_TRACE_IMPL(&cyc_ptrace, CY_PASSIVE_LEVEL, why, arg0, arg1)
692 693
693 694 #define CYC_TRACE_IMPL(buf, level, why, a0, a1) { \
694 695 if (panicstr == NULL) { \
695 696 int _ndx = (buf)->cyt_ndx; \
696 697 cyc_tracerec_t *_rec = &(buf)->cyt_buf[_ndx]; \
697 698 (buf)->cyt_ndx = (++_ndx == CY_NTRACEREC) ? 0 : _ndx; \
698 699 _rec->cyt_tstamp = gethrtime_unscaled(); \
699 700 _rec->cyt_why = (why); \
700 701 _rec->cyt_arg0 = (uint64_t)(uintptr_t)(a0); \
701 702 _rec->cyt_arg1 = (uint64_t)(uintptr_t)(a1); \
702 703 cyclic_coverage(why, level, \
703 704 (uint64_t)(uintptr_t)(a0), (uint64_t)(uintptr_t)(a1)); \
704 705 } \
705 706 }
706 707
707 708 #else
708 709
709 710 static int cyc_trace_enabled = 0;
710 711
711 712 #define CYC_TRACE(cpu, level, why, arg0, arg1)
712 713 #define CYC_PTRACE(why, arg0, arg1)
713 714
714 715 #endif
715 716
716 717 #define CYC_TRACE0(cpu, level, why) CYC_TRACE(cpu, level, why, 0, 0)
717 718 #define CYC_TRACE1(cpu, level, why, arg0) CYC_TRACE(cpu, level, why, arg0, 0)
718 719
719 720 #define CYC_PTRACE0(why) CYC_PTRACE(why, 0, 0)
720 721 #define CYC_PTRACE1(why, arg0) CYC_PTRACE(why, arg0, 0)
721 722
722 723 static kmem_cache_t *cyclic_id_cache;
723 724 static cyc_id_t *cyclic_id_head;
724 725 static hrtime_t cyclic_resolution;
725 726 static cyc_backend_t cyclic_backend;
726 727
727 728 /*
728 729 * Returns 1 if the upheap propagated to the root, 0 if it did not. This
729 730 * allows the caller to reprogram the backend only when the root has been
730 731 * modified.
731 732 */
732 733 static int
733 734 cyclic_upheap(cyc_cpu_t *cpu, cyc_index_t ndx)
734 735 {
735 736 cyclic_t *cyclics;
736 737 cyc_index_t *heap;
737 738 cyc_index_t heap_parent, heap_current = ndx;
738 739 cyc_index_t parent, current;
739 740
740 741 if (heap_current == 0)
741 742 return (1);
742 743
743 744 heap = cpu->cyp_heap;
744 745 cyclics = cpu->cyp_cyclics;
745 746 heap_parent = CYC_HEAP_PARENT(heap_current);
746 747
747 748 for (;;) {
748 749 current = heap[heap_current];
749 750 parent = heap[heap_parent];
750 751
751 752 /*
752 753 * We have an expiration time later than our parent; we're
753 754 * done.
754 755 */
755 756 if (cyclics[current].cy_expire >= cyclics[parent].cy_expire)
756 757 return (0);
757 758
758 759 /*
759 760 * We need to swap with our parent, and continue up the heap.
760 761 */
761 762 heap[heap_parent] = current;
762 763 heap[heap_current] = parent;
763 764
764 765 /*
765 766 * If we just reached the root, we're done.
766 767 */
767 768 if (heap_parent == 0)
768 769 return (1);
769 770
770 771 heap_current = heap_parent;
771 772 heap_parent = CYC_HEAP_PARENT(heap_current);
772 773 }
773 774 }
774 775
775 776 static void
776 777 cyclic_downheap(cyc_cpu_t *cpu, cyc_index_t ndx)
777 778 {
778 779 cyclic_t *cyclics = cpu->cyp_cyclics;
779 780 cyc_index_t *heap = cpu->cyp_heap;
780 781
781 782 cyc_index_t heap_left, heap_right, heap_me = ndx;
782 783 cyc_index_t left, right, me;
783 784 cyc_index_t nelems = cpu->cyp_nelems;
784 785
785 786 for (;;) {
786 787 /*
787 788 * If we don't have a left child (i.e., we're a leaf), we're
788 789 * done.
789 790 */
790 791 if ((heap_left = CYC_HEAP_LEFT(heap_me)) >= nelems)
791 792 return;
792 793
793 794 left = heap[heap_left];
794 795 me = heap[heap_me];
795 796
796 797 heap_right = CYC_HEAP_RIGHT(heap_me);
797 798
798 799 /*
799 800 * Even if we don't have a right child, we still need to compare
800 801 * our expiration time against that of our left child.
801 802 */
802 803 if (heap_right >= nelems)
803 804 goto comp_left;
804 805
805 806 right = heap[heap_right];
806 807
807 808 /*
808 809 * We have both a left and a right child. We need to compare
809 810 * the expiration times of the children to determine which
810 811 * expires earlier.
811 812 */
812 813 if (cyclics[right].cy_expire < cyclics[left].cy_expire) {
813 814 /*
814 815 * Our right child is the earlier of our children.
815 816 * We'll now compare our expiration time to its; if
816 817 * ours is the earlier, we're done.
817 818 */
818 819 if (cyclics[me].cy_expire <= cyclics[right].cy_expire)
819 820 return;
820 821
821 822 /*
822 823 * Our right child expires earlier than we do; swap
823 824 * with our right child, and descend right.
824 825 */
825 826 heap[heap_right] = me;
826 827 heap[heap_me] = right;
827 828 heap_me = heap_right;
828 829 continue;
829 830 }
830 831
831 832 comp_left:
832 833 /*
833 834 * Our left child is the earlier of our children (or we have
834 835 * no right child). We'll now compare our expiration time
835 836 * to its; if ours is the earlier, we're done.
836 837 */
837 838 if (cyclics[me].cy_expire <= cyclics[left].cy_expire)
838 839 return;
839 840
840 841 /*
841 842 * Our left child expires earlier than we do; swap with our
842 843 * left child, and descend left.
843 844 */
844 845 heap[heap_left] = me;
845 846 heap[heap_me] = left;
846 847 heap_me = heap_left;
847 848 }
848 849 }
849 850
850 851 static void
851 852 cyclic_expire(cyc_cpu_t *cpu, cyc_index_t ndx, cyclic_t *cyclic)
852 853 {
853 854 cyc_backend_t *be = cpu->cyp_backend;
854 855 cyc_level_t level = cyclic->cy_level;
855 856
856 857 /*
857 858 * If this is a CY_HIGH_LEVEL cyclic, just call the handler; we don't
858 859 * need to worry about the pend count for CY_HIGH_LEVEL cyclics.
859 860 */
860 861 if (level == CY_HIGH_LEVEL) {
861 862 cyc_func_t handler = cyclic->cy_handler;
862 863 void *arg = cyclic->cy_arg;
863 864
864 865 CYC_TRACE(cpu, CY_HIGH_LEVEL, "handler-in", handler, arg);
865 866 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
866 867
867 868 (*handler)(arg);
868 869
869 870 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
870 871 CYC_TRACE(cpu, CY_HIGH_LEVEL, "handler-out", handler, arg);
871 872
872 873 return;
873 874 }
874 875
875 876 /*
876 877 * We're at CY_HIGH_LEVEL; this modification to cy_pend need not
877 878 * be atomic (the high interrupt level assures that it will appear
878 879 * atomic to any softint currently running).
879 880 */
880 881 if (cyclic->cy_pend++ == 0) {
881 882 cyc_softbuf_t *softbuf = &cpu->cyp_softbuf[level];
882 883 cyc_pcbuffer_t *pc = &softbuf->cys_buf[softbuf->cys_hard];
883 884
884 885 /*
885 886 * We need to enqueue this cyclic in the soft buffer.
886 887 */
887 888 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-enq", cyclic,
888 889 pc->cypc_prodndx);
889 890 pc->cypc_buf[pc->cypc_prodndx++ & pc->cypc_sizemask] = ndx;
890 891
891 892 ASSERT(pc->cypc_prodndx != pc->cypc_consndx);
892 893 } else {
893 894 /*
894 895 * If the pend count is zero after we incremented it, then
895 896 * we've wrapped (i.e. we had a cy_pend count of over four
896 897 * billion. In this case, we clamp the pend count at
897 898 * UINT32_MAX. Yes, cyclics can be lost in this case.
898 899 */
899 900 if (cyclic->cy_pend == 0) {
900 901 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "expire-wrap", cyclic);
901 902 cyclic->cy_pend = UINT32_MAX;
902 903 }
903 904
904 905 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expire-bump", cyclic, 0);
905 906 }
906 907
907 908 be->cyb_softint(be->cyb_arg, cyclic->cy_level);
908 909 }
909 910
910 911 /*
911 912 * cyclic_fire(cpu_t *)
912 913 *
913 914 * Overview
914 915 *
915 916 * cyclic_fire() is the cyclic subsystem's CY_HIGH_LEVEL interrupt handler.
916 917 * Called by the cyclic backend.
917 918 *
918 919 * Arguments and notes
919 920 *
920 921 * The only argument is the CPU on which the interrupt is executing;
921 922 * backends must call into cyclic_fire() on the specified CPU.
922 923 *
923 924 * cyclic_fire() may be called spuriously without ill effect. Optimal
924 925 * backends will call into cyclic_fire() at or shortly after the time
925 926 * requested via cyb_reprogram(). However, calling cyclic_fire()
926 927 * arbitrarily late will only manifest latency bubbles; the correctness
927 928 * of the cyclic subsystem does not rely on the timeliness of the backend.
928 929 *
929 930 * cyclic_fire() is wait-free; it will not block or spin.
930 931 *
931 932 * Return values
932 933 *
933 934 * None.
934 935 *
935 936 * Caller's context
936 937 *
937 938 * cyclic_fire() must be called from CY_HIGH_LEVEL interrupt context.
938 939 */
939 940 void
940 941 cyclic_fire(cpu_t *c)
941 942 {
942 943 cyc_cpu_t *cpu = c->cpu_cyclic;
943 944 cyc_backend_t *be = cpu->cyp_backend;
944 945 cyc_index_t *heap = cpu->cyp_heap;
945 946 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
946 947 void *arg = be->cyb_arg;
947 948 hrtime_t now = gethrtime();
948 949 hrtime_t exp;
949 950
950 951 CYC_TRACE(cpu, CY_HIGH_LEVEL, "fire", now, 0);
951 952
952 953 if (cpu->cyp_nelems == 0) {
953 954 /*
954 955 * This is a spurious fire. Count it as such, and blow
955 956 * out of here.
956 957 */
957 958 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "fire-spurious");
958 959 return;
959 960 }
960 961
961 962 for (;;) {
962 963 cyc_index_t ndx = heap[0];
963 964
964 965 cyclic = &cyclics[ndx];
965 966
966 967 ASSERT(!(cyclic->cy_flags & CYF_FREE));
967 968
968 969 CYC_TRACE(cpu, CY_HIGH_LEVEL, "fire-check", cyclic,
969 970 cyclic->cy_expire);
970 971
971 972 if ((exp = cyclic->cy_expire) > now)
972 973 break;
973 974
974 975 cyclic_expire(cpu, ndx, cyclic);
975 976
976 977 /*
977 978 * If the handler reprogrammed the cyclic, then don't
978 979 * recompute the expiration. Then, if the interval is
979 980 * infinity, set the expiration to infinity. This can
980 981 * be used to create one-shot timers.
981 982 */
982 983 if (exp != cyclic->cy_expire) {
983 984 /*
984 985 * If a hi level cyclic reprograms itself,
985 986 * the heap adjustment and reprogramming of the
986 987 * clock source have already been done at this
987 988 * point. So, we can continue.
988 989 */
989 990 continue;
990 991 }
991 992
992 993 if (cyclic->cy_interval == CY_INFINITY)
993 994 exp = CY_INFINITY;
994 995 else
995 996 exp += cyclic->cy_interval;
996 997
997 998 /*
998 999 * If this cyclic will be set to next expire in the distant
999 1000 * past, we have one of two situations:
1000 1001 *
1001 1002 * a) This is the first firing of a cyclic which had
1002 1003 * cy_expire set to 0.
1003 1004 *
1004 1005 * b) We are tragically late for a cyclic -- most likely
1005 1006 * due to being in the debugger.
1006 1007 *
1007 1008 * In either case, we set the new expiration time to be the
1008 1009 * next interval boundary. This assures that the
1009 1010 * expiration time modulo the interval is invariant.
1010 1011 *
1011 1012 * We arbitrarily define "distant" to be one second (one second
1012 1013 * is chosen because it's shorter than any foray to the
1013 1014 * debugger while still being longer than any legitimate
1014 1015 * stretch at CY_HIGH_LEVEL).
1015 1016 */
1016 1017
1017 1018 if (now - exp > NANOSEC) {
1018 1019 hrtime_t interval = cyclic->cy_interval;
1019 1020
1020 1021 CYC_TRACE(cpu, CY_HIGH_LEVEL, exp == interval ?
1021 1022 "fire-first" : "fire-swing", now, exp);
1022 1023
1023 1024 exp += ((now - exp) / interval + 1) * interval;
1024 1025 }
1025 1026
1026 1027 cyclic->cy_expire = exp;
1027 1028 cyclic_downheap(cpu, 0);
1028 1029 }
1029 1030
1030 1031 /*
1031 1032 * Now we have a cyclic in the root slot which isn't in the past;
1032 1033 * reprogram the interrupt source.
1033 1034 */
1034 1035 be->cyb_reprogram(arg, exp);
1035 1036 }
1036 1037
1037 1038 static void
1038 1039 cyclic_remove_pend(cyc_cpu_t *cpu, cyc_level_t level, cyclic_t *cyclic)
1039 1040 {
1040 1041 cyc_func_t handler = cyclic->cy_handler;
1041 1042 void *arg = cyclic->cy_arg;
1042 1043 uint32_t i, rpend = cpu->cyp_rpend - 1;
1043 1044
1044 1045 ASSERT(cyclic->cy_flags & CYF_FREE);
1045 1046 ASSERT(cyclic->cy_pend == 0);
1046 1047 ASSERT(cpu->cyp_state == CYS_REMOVING);
1047 1048 ASSERT(cpu->cyp_rpend > 0);
1048 1049
1049 1050 CYC_TRACE(cpu, level, "remove-rpend", cyclic, cpu->cyp_rpend);
1050 1051
1051 1052 /*
1052 1053 * Note that we only call the handler cyp_rpend - 1 times; this is
1053 1054 * to account for the handler call in cyclic_softint().
1054 1055 */
1055 1056 for (i = 0; i < rpend; i++) {
1056 1057 CYC_TRACE(cpu, level, "rpend-in", handler, arg);
1057 1058 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1058 1059
1059 1060 (*handler)(arg);
1060 1061
1061 1062 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1062 1063 CYC_TRACE(cpu, level, "rpend-out", handler, arg);
1063 1064 }
1064 1065
1065 1066 /*
1066 1067 * We can now let the remove operation complete.
1067 1068 */
1068 1069 sema_v(&cpu->cyp_modify_wait);
1069 1070 }
1070 1071
1071 1072 /*
1072 1073 * cyclic_softint(cpu_t *cpu, cyc_level_t level)
1073 1074 *
1074 1075 * Overview
1075 1076 *
1076 1077 * cyclic_softint() is the cyclic subsystem's CY_LOCK_LEVEL and CY_LOW_LEVEL
1077 1078 * soft interrupt handler. Called by the cyclic backend.
1078 1079 *
1079 1080 * Arguments and notes
1080 1081 *
1081 1082 * The first argument to cyclic_softint() is the CPU on which the interrupt
1082 1083 * is executing; backends must call into cyclic_softint() on the specified
1083 1084 * CPU. The second argument is the level of the soft interrupt; it must
1084 1085 * be one of CY_LOCK_LEVEL or CY_LOW_LEVEL.
1085 1086 *
1086 1087 * cyclic_softint() will call the handlers for cyclics pending at the
1087 1088 * specified level. cyclic_softint() will not return until all pending
1088 1089 * cyclics at the specified level have been dealt with; intervening
1089 1090 * CY_HIGH_LEVEL interrupts which enqueue cyclics at the specified level
1090 1091 * may therefore prolong cyclic_softint().
1091 1092 *
1092 1093 * cyclic_softint() never disables interrupts, and, if neither a
1093 1094 * cyclic_add() nor a cyclic_remove() is pending on the specified CPU, is
1094 1095 * lock-free. This assures that in the common case, cyclic_softint()
1095 1096 * completes without blocking, and never starves cyclic_fire(). If either
1096 1097 * cyclic_add() or cyclic_remove() is pending, cyclic_softint() may grab
1097 1098 * a dispatcher lock.
1098 1099 *
1099 1100 * While cyclic_softint() is designed for bounded latency, it is obviously
1100 1101 * at the mercy of its cyclic handlers. Because cyclic handlers may block
1101 1102 * arbitrarily, callers of cyclic_softint() should not rely upon
1102 1103 * deterministic completion.
1103 1104 *
1104 1105 * cyclic_softint() may be called spuriously without ill effect.
1105 1106 *
1106 1107 * Return value
1107 1108 *
1108 1109 * None.
1109 1110 *
1110 1111 * Caller's context
1111 1112 *
1112 1113 * The caller must be executing in soft interrupt context at either
1113 1114 * CY_LOCK_LEVEL or CY_LOW_LEVEL. The level passed to cyclic_softint()
1114 1115 * must match the level at which it is executing. On optimal backends,
1115 1116 * the caller will hold no locks. In any case, the caller may not hold
1116 1117 * cpu_lock or any lock acquired by any cyclic handler or held across
1117 1118 * any of cyclic_add(), cyclic_remove(), cyclic_bind() or cyclic_juggle().
1118 1119 */
1119 1120 void
1120 1121 cyclic_softint(cpu_t *c, cyc_level_t level)
1121 1122 {
1122 1123 cyc_cpu_t *cpu = c->cpu_cyclic;
1123 1124 cyc_softbuf_t *softbuf;
1124 1125 int soft, *buf, consndx, resized = 0, intr_resized = 0;
1125 1126 cyc_pcbuffer_t *pc;
1126 1127 cyclic_t *cyclics = cpu->cyp_cyclics;
1127 1128 int sizemask;
1128 1129
1129 1130 CYC_TRACE(cpu, level, "softint", cyclics, 0);
1130 1131
1131 1132 ASSERT(level < CY_LOW_LEVEL + CY_SOFT_LEVELS);
1132 1133
1133 1134 softbuf = &cpu->cyp_softbuf[level];
1134 1135 top:
1135 1136 soft = softbuf->cys_soft;
1136 1137 ASSERT(soft == 0 || soft == 1);
1137 1138
1138 1139 pc = &softbuf->cys_buf[soft];
1139 1140 buf = pc->cypc_buf;
1140 1141 consndx = pc->cypc_consndx;
1141 1142 sizemask = pc->cypc_sizemask;
1142 1143
1143 1144 CYC_TRACE(cpu, level, "softint-top", cyclics, pc);
1144 1145
1145 1146 while (consndx != pc->cypc_prodndx) {
1146 1147 uint32_t pend, npend, opend;
1147 1148 int consmasked = consndx & sizemask;
1148 1149 cyclic_t *cyclic = &cyclics[buf[consmasked]];
1149 1150 cyc_func_t handler = cyclic->cy_handler;
1150 1151 void *arg = cyclic->cy_arg;
1151 1152
1152 1153 ASSERT(buf[consmasked] < cpu->cyp_size);
1153 1154 CYC_TRACE(cpu, level, "consuming", consndx, cyclic);
1154 1155
1155 1156 /*
1156 1157 * We have found this cyclic in the pcbuffer. We know that
1157 1158 * one of the following is true:
1158 1159 *
1159 1160 * (a) The pend is non-zero. We need to execute the handler
1160 1161 * at least once.
1161 1162 *
1162 1163 * (b) The pend _was_ non-zero, but it's now zero due to a
1163 1164 * resize. We will call the handler once, see that we
1164 1165 * are in this case, and read the new cyclics buffer
1165 1166 * (and hence the old non-zero pend).
1166 1167 *
1167 1168 * (c) The pend _was_ non-zero, but it's now zero due to a
1168 1169 * removal. We will call the handler once, see that we
1169 1170 * are in this case, and call into cyclic_remove_pend()
1170 1171 * to call the cyclic rpend times. We will take into
1171 1172 * account that we have already called the handler once.
1172 1173 *
1173 1174 * Point is: it's safe to call the handler without first
1174 1175 * checking the pend.
1175 1176 */
1176 1177 do {
1177 1178 CYC_TRACE(cpu, level, "handler-in", handler, arg);
1178 1179 DTRACE_PROBE1(cyclic__start, cyclic_t *, cyclic);
1179 1180
1180 1181 (*handler)(arg);
1181 1182
1182 1183 DTRACE_PROBE1(cyclic__end, cyclic_t *, cyclic);
1183 1184 CYC_TRACE(cpu, level, "handler-out", handler, arg);
1184 1185 reread:
1185 1186 pend = cyclic->cy_pend;
1186 1187 npend = pend - 1;
1187 1188
1188 1189 if (pend == 0) {
1189 1190 if (cpu->cyp_state == CYS_REMOVING) {
1190 1191 /*
1191 1192 * This cyclic has been removed while
1192 1193 * it had a non-zero pend count (we
1193 1194 * know it was non-zero because we
1194 1195 * found this cyclic in the pcbuffer).
1195 1196 * There must be a non-zero rpend for
1196 1197 * this CPU, and there must be a remove
1197 1198 * operation blocking; we'll call into
1198 1199 * cyclic_remove_pend() to clean this
1199 1200 * up, and break out of the pend loop.
1200 1201 */
1201 1202 cyclic_remove_pend(cpu, level, cyclic);
1202 1203 break;
1203 1204 }
1204 1205
1205 1206 /*
1206 1207 * We must have had a resize interrupt us.
1207 1208 */
1208 1209 CYC_TRACE(cpu, level, "resize-int", cyclics, 0);
1209 1210 ASSERT(cpu->cyp_state == CYS_EXPANDING);
1210 1211 ASSERT(cyclics != cpu->cyp_cyclics);
1211 1212 ASSERT(resized == 0);
1212 1213 ASSERT(intr_resized == 0);
1213 1214 intr_resized = 1;
1214 1215 cyclics = cpu->cyp_cyclics;
1215 1216 cyclic = &cyclics[buf[consmasked]];
1216 1217 ASSERT(cyclic->cy_handler == handler);
1217 1218 ASSERT(cyclic->cy_arg == arg);
1218 1219 goto reread;
1219 1220 }
1220 1221
1221 1222 if ((opend =
1222 - cas32(&cyclic->cy_pend, pend, npend)) != pend) {
1223 + atomic_cas_32(&cyclic->cy_pend, pend, npend)) !=
1224 + pend) {
1223 1225 /*
1224 - * Our cas32 can fail for one of several
1226 + * Our atomic_cas_32 can fail for one of several
1225 1227 * reasons:
1226 1228 *
1227 1229 * (a) An intervening high level bumped up the
1228 1230 * pend count on this cyclic. In this
1229 1231 * case, we will see a higher pend.
1230 1232 *
1231 1233 * (b) The cyclics array has been yanked out
1232 1234 * from underneath us by a resize
1233 1235 * operation. In this case, pend is 0 and
1234 1236 * cyp_state is CYS_EXPANDING.
1235 1237 *
1236 1238 * (c) The cyclic has been removed by an
1237 1239 * intervening remove-xcall. In this case,
1238 1240 * pend will be 0, the cyp_state will be
1239 1241 * CYS_REMOVING, and the cyclic will be
1240 1242 * marked CYF_FREE.
1241 1243 *
1242 1244 * The assertion below checks that we are
1243 1245 * in one of the above situations. The
1244 1246 * action under all three is to return to
1245 1247 * the top of the loop.
1246 1248 */
1247 1249 CYC_TRACE(cpu, level, "cas-fail", opend, pend);
1248 1250 ASSERT(opend > pend || (opend == 0 &&
1249 1251 ((cyclics != cpu->cyp_cyclics &&
1250 1252 cpu->cyp_state == CYS_EXPANDING) ||
1251 1253 (cpu->cyp_state == CYS_REMOVING &&
1252 1254 (cyclic->cy_flags & CYF_FREE)))));
1253 1255 goto reread;
1254 1256 }
1255 1257
1256 1258 /*
1257 1259 * Okay, so we've managed to successfully decrement
1258 1260 * pend. If we just decremented the pend to 0, we're
1259 1261 * done.
1260 1262 */
1261 1263 } while (npend > 0);
1262 1264
1263 1265 pc->cypc_consndx = ++consndx;
1264 1266 }
1265 1267
1266 1268 /*
1267 1269 * If the high level handler is no longer writing to the same
1268 1270 * buffer, then we've had a resize. We need to switch our soft
1269 1271 * index, and goto top.
1270 1272 */
1271 1273 if (soft != softbuf->cys_hard) {
1272 1274 /*
1273 1275 * We can assert that the other buffer has grown by exactly
1274 1276 * one factor of two.
1275 1277 */
1276 1278 CYC_TRACE(cpu, level, "buffer-grow", 0, 0);
1277 1279 ASSERT(cpu->cyp_state == CYS_EXPANDING);
1278 1280 ASSERT(softbuf->cys_buf[softbuf->cys_hard].cypc_sizemask ==
1279 1281 (softbuf->cys_buf[soft].cypc_sizemask << 1) + 1 ||
1280 1282 softbuf->cys_buf[soft].cypc_sizemask == 0);
1281 1283 ASSERT(softbuf->cys_hard == (softbuf->cys_soft ^ 1));
1282 1284
1283 1285 /*
1284 1286 * If our cached cyclics pointer doesn't match cyp_cyclics,
1285 1287 * then we took a resize between our last iteration of the
1286 1288 * pend loop and the check against softbuf->cys_hard.
1287 1289 */
1288 1290 if (cpu->cyp_cyclics != cyclics) {
1289 1291 CYC_TRACE1(cpu, level, "resize-int-int", consndx);
1290 1292 cyclics = cpu->cyp_cyclics;
1291 1293 }
1292 1294
1293 1295 softbuf->cys_soft = softbuf->cys_hard;
1294 1296
1295 1297 ASSERT(resized == 0);
1296 1298 resized = 1;
1297 1299 goto top;
1298 1300 }
1299 1301
1300 1302 /*
1301 1303 * If we were interrupted by a resize operation, then we must have
1302 1304 * seen the hard index change.
1303 1305 */
1304 1306 ASSERT(!(intr_resized == 1 && resized == 0));
1305 1307
1306 1308 if (resized) {
1307 1309 uint32_t lev, nlev;
1308 1310
1309 1311 ASSERT(cpu->cyp_state == CYS_EXPANDING);
1310 1312
1311 1313 do {
1312 1314 lev = cpu->cyp_modify_levels;
1313 1315 nlev = lev + 1;
1314 - } while (cas32(&cpu->cyp_modify_levels, lev, nlev) != lev);
1316 + } while (atomic_cas_32(&cpu->cyp_modify_levels, lev, nlev) !=
1317 + lev);
1315 1318
1316 1319 /*
1317 1320 * If we are the last soft level to see the modification,
1318 1321 * post on cyp_modify_wait. Otherwise, (if we're not
1319 1322 * already at low level), post down to the next soft level.
1320 1323 */
1321 1324 if (nlev == CY_SOFT_LEVELS) {
1322 1325 CYC_TRACE0(cpu, level, "resize-kick");
1323 1326 sema_v(&cpu->cyp_modify_wait);
1324 1327 } else {
1325 1328 ASSERT(nlev < CY_SOFT_LEVELS);
1326 1329 if (level != CY_LOW_LEVEL) {
1327 1330 cyc_backend_t *be = cpu->cyp_backend;
1328 1331
1329 1332 CYC_TRACE0(cpu, level, "resize-post");
1330 1333 be->cyb_softint(be->cyb_arg, level - 1);
1331 1334 }
1332 1335 }
1333 1336 }
1334 1337 }
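
The two compare-and-swap loops above are the pattern this change migrates from the deprecated cas32() to atomic_cas_32(): read the target, compute the desired value, and retry until the CAS reports that nothing intervened. A minimal standalone sketch of the idiom, assuming only <sys/atomic.h> (the counter and function below are illustrative, not part of this change):

#include <sys/atomic.h>

/*
 * Atomically decrement a 32-bit counter, but never below zero.
 * atomic_cas_32() returns the value it found at *counter; a mismatch
 * with oval means another thread intervened, so reread and retry.
 */
static uint32_t
counter_dec(volatile uint32_t *counter)
{
	uint32_t oval, nval;

	do {
		if ((oval = *counter) == 0)
			return (0);		/* already at the floor */
		nval = oval - 1;
	} while (atomic_cas_32(counter, oval, nval) != oval);

	return (nval);
}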
1335 1338
1336 1339 static void
1337 1340 cyclic_expand_xcall(cyc_xcallarg_t *arg)
1338 1341 {
1339 1342 cyc_cpu_t *cpu = arg->cyx_cpu;
1340 1343 cyc_backend_t *be = cpu->cyp_backend;
1341 1344 cyb_arg_t bar = be->cyb_arg;
1342 1345 cyc_cookie_t cookie;
1343 1346 cyc_index_t new_size = arg->cyx_size, size = cpu->cyp_size, i;
1344 1347 cyc_index_t *new_heap = arg->cyx_heap;
1345 1348 cyclic_t *cyclics = cpu->cyp_cyclics, *new_cyclics = arg->cyx_cyclics;
1346 1349
1347 1350 ASSERT(cpu->cyp_state == CYS_EXPANDING);
1348 1351
1349 1352 /*
1350 1353 * This is a little dicey. First, we'll raise our interrupt level
1351 1354 * to CY_HIGH_LEVEL. This CPU already has a new heap, cyclic array,
1352 1355 * etc.; we just need to bcopy them across. As for the softint
1353 1356 * buffers, we'll switch the active buffers. The actual softints will
1354 1357 * take care of consuming any pending cyclics in the old buffer.
1355 1358 */
1356 1359 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1357 1360
1358 1361 CYC_TRACE(cpu, CY_HIGH_LEVEL, "expand", new_size, 0);
1359 1362
1360 1363 /*
1361 1364 * Assert that the new size is a power of 2.
1362 1365 */
1363 1366 ASSERT((new_size & new_size - 1) == 0);
1364 1367 ASSERT(new_size == (size << 1));
1365 1368 ASSERT(cpu->cyp_heap != NULL && cpu->cyp_cyclics != NULL);
1366 1369
1367 1370 bcopy(cpu->cyp_heap, new_heap, sizeof (cyc_index_t) * size);
1368 1371 bcopy(cyclics, new_cyclics, sizeof (cyclic_t) * size);
1369 1372
1370 1373 /*
1371 1374 * Now run through the old cyclics array, setting pend to 0. To
1372 1375 * softints (which are executing at a lower priority level), the
1373 1376 * pends dropping to 0 will appear atomic with the cyp_cyclics
1374 1377 * pointer changing.
1375 1378 */
1376 1379 for (i = 0; i < size; i++)
1377 1380 cyclics[i].cy_pend = 0;
1378 1381
1379 1382 /*
1380 1383 * Set up the free list, and set all of the new cyclics to be CYF_FREE.
1381 1384 */
1382 1385 for (i = size; i < new_size; i++) {
1383 1386 new_heap[i] = i;
1384 1387 new_cyclics[i].cy_flags = CYF_FREE;
1385 1388 }
1386 1389
1387 1390 /*
1388 1391 * We can go ahead and plow the value of cyp_heap and cyp_cyclics;
1389 1392 * cyclic_expand() has kept a copy.
1390 1393 */
1391 1394 cpu->cyp_heap = new_heap;
1392 1395 cpu->cyp_cyclics = new_cyclics;
1393 1396 cpu->cyp_size = new_size;
1394 1397
1395 1398 /*
1396 1399 * We've switched over the heap and the cyclics array. Now we need
1397 1400 * to switch over our active softint buffer pointers.
1398 1401 */
1399 1402 for (i = CY_LOW_LEVEL; i < CY_LOW_LEVEL + CY_SOFT_LEVELS; i++) {
1400 1403 cyc_softbuf_t *softbuf = &cpu->cyp_softbuf[i];
1401 1404 uchar_t hard = softbuf->cys_hard;
1402 1405
1403 1406 /*
1404 1407 * Assert that we're not in the middle of a resize operation.
1405 1408 */
1406 1409 ASSERT(hard == softbuf->cys_soft);
1407 1410 ASSERT(hard == 0 || hard == 1);
1408 1411 ASSERT(softbuf->cys_buf[hard].cypc_buf != NULL);
1409 1412
1410 1413 softbuf->cys_hard = hard ^ 1;
1411 1414
1412 1415 /*
1413 1416 * The caller (cyclic_expand()) is responsible for setting
1414 1417 * up the new producer-consumer buffer; assert that it's
1415 1418 * been done correctly.
1416 1419 */
1417 1420 ASSERT(softbuf->cys_buf[hard ^ 1].cypc_buf != NULL);
1418 1421 ASSERT(softbuf->cys_buf[hard ^ 1].cypc_prodndx == 0);
1419 1422 ASSERT(softbuf->cys_buf[hard ^ 1].cypc_consndx == 0);
1420 1423 }
1421 1424
1422 1425 /*
1423 1426 * That's all there is to it; now we just need to postdown to
1424 1427 * get the softint chain going.
1425 1428 */
1426 1429 be->cyb_softint(bar, CY_HIGH_LEVEL - 1);
1427 1430 be->cyb_restore_level(bar, cookie);
1428 1431 }
1429 1432
1430 1433 /*
1431 1434 * cyclic_expand() will cross call onto the CPU to perform the actual
1432 1435 * expand operation.
1433 1436 */
1434 1437 static void
1435 1438 cyclic_expand(cyc_cpu_t *cpu)
1436 1439 {
1437 1440 cyc_index_t new_size, old_size;
1438 1441 cyc_index_t *new_heap, *old_heap;
1439 1442 cyclic_t *new_cyclics, *old_cyclics;
1440 1443 cyc_xcallarg_t arg;
1441 1444 cyc_backend_t *be = cpu->cyp_backend;
1442 1445 char old_hard;
1443 1446 int i;
1444 1447
1445 1448 ASSERT(MUTEX_HELD(&cpu_lock));
1446 1449 ASSERT(cpu->cyp_state == CYS_ONLINE);
1447 1450
1448 1451 cpu->cyp_state = CYS_EXPANDING;
1449 1452
1450 1453 old_heap = cpu->cyp_heap;
1451 1454 old_cyclics = cpu->cyp_cyclics;
1452 1455
1453 1456 if ((new_size = ((old_size = cpu->cyp_size) << 1)) == 0) {
1454 1457 new_size = CY_DEFAULT_PERCPU;
1455 1458 ASSERT(old_heap == NULL && old_cyclics == NULL);
1456 1459 }
1457 1460
1458 1461 /*
1459 1462 * Check that the new_size is a power of 2.
1460 1463 */
1461 1464 ASSERT((new_size - 1 & new_size) == 0);
1462 1465
1463 1466 new_heap = kmem_alloc(sizeof (cyc_index_t) * new_size, KM_SLEEP);
1464 1467 new_cyclics = kmem_zalloc(sizeof (cyclic_t) * new_size, KM_SLEEP);
1465 1468
1466 1469 /*
1467 1470 * We know that no other expansions are in progress (they serialize
1468 1471 * on cpu_lock), so we can safely read the softbuf metadata.
1469 1472 */
1470 1473 old_hard = cpu->cyp_softbuf[0].cys_hard;
1471 1474
1472 1475 for (i = CY_LOW_LEVEL; i < CY_LOW_LEVEL + CY_SOFT_LEVELS; i++) {
1473 1476 cyc_softbuf_t *softbuf = &cpu->cyp_softbuf[i];
1474 1477 char hard = softbuf->cys_hard;
1475 1478 cyc_pcbuffer_t *pc = &softbuf->cys_buf[hard ^ 1];
1476 1479
1477 1480 ASSERT(hard == old_hard);
1478 1481 ASSERT(hard == softbuf->cys_soft);
1479 1482 ASSERT(pc->cypc_buf == NULL);
1480 1483
1481 1484 pc->cypc_buf =
1482 1485 kmem_alloc(sizeof (cyc_index_t) * new_size, KM_SLEEP);
1483 1486 pc->cypc_prodndx = pc->cypc_consndx = 0;
1484 1487 pc->cypc_sizemask = new_size - 1;
1485 1488 }
1486 1489
1487 1490 arg.cyx_cpu = cpu;
1488 1491 arg.cyx_heap = new_heap;
1489 1492 arg.cyx_cyclics = new_cyclics;
1490 1493 arg.cyx_size = new_size;
1491 1494
1492 1495 cpu->cyp_modify_levels = 0;
1493 1496
1494 1497 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
1495 1498 (cyc_func_t)cyclic_expand_xcall, &arg);
1496 1499
1497 1500 /*
1498 1501 * Now block, waiting for the resize operation to complete.
1499 1502 */
1500 1503 sema_p(&cpu->cyp_modify_wait);
1501 1504 ASSERT(cpu->cyp_modify_levels == CY_SOFT_LEVELS);
1502 1505
1503 1506 /*
1504 1507 * The operation is complete; we can now free the old buffers.
1505 1508 */
1506 1509 for (i = CY_LOW_LEVEL; i < CY_LOW_LEVEL + CY_SOFT_LEVELS; i++) {
1507 1510 cyc_softbuf_t *softbuf = &cpu->cyp_softbuf[i];
1508 1511 char hard = softbuf->cys_hard;
1509 1512 cyc_pcbuffer_t *pc = &softbuf->cys_buf[hard ^ 1];
1510 1513
1511 1514 ASSERT(hard == (old_hard ^ 1));
1512 1515 ASSERT(hard == softbuf->cys_soft);
1513 1516
1514 1517 if (pc->cypc_buf == NULL)
1515 1518 continue;
1516 1519
1517 1520 ASSERT(pc->cypc_sizemask == ((new_size - 1) >> 1));
1518 1521
1519 1522 kmem_free(pc->cypc_buf,
1520 1523 sizeof (cyc_index_t) * (pc->cypc_sizemask + 1));
1521 1524 pc->cypc_buf = NULL;
1522 1525 }
1523 1526
1524 1527 if (old_cyclics != NULL) {
1525 1528 ASSERT(old_heap != NULL);
1526 1529 ASSERT(old_size != 0);
1527 1530 kmem_free(old_cyclics, sizeof (cyclic_t) * old_size);
1528 1531 kmem_free(old_heap, sizeof (cyc_index_t) * old_size);
1529 1532 }
1530 1533
1531 1534 ASSERT(cpu->cyp_state == CYS_EXPANDING);
1532 1535 cpu->cyp_state = CYS_ONLINE;
1533 1536 }
1534 1537
1535 1538 /*
1536 1539 * cyclic_pick_cpu will attempt to pick a CPU according to the constraints
1537 1540 * specified by the partition, bound CPU, and flags. Additionally,
1538 1541 * cyclic_pick_cpu() will not pick the avoid CPU; it will return NULL if
1539 1542 * the avoid CPU is the only CPU which satisfies the constraints.
1540 1543 *
1541 1544 * If CYF_CPU_BOUND is set in flags, the specified CPU must be non-NULL.
1542 1545 * If CYF_PART_BOUND is set in flags, the specified partition must be non-NULL.
1543 1546 * If both CYF_CPU_BOUND and CYF_PART_BOUND are set, the specified CPU must
1544 1547 * be in the specified partition.
1545 1548 */
1546 1549 static cyc_cpu_t *
1547 1550 cyclic_pick_cpu(cpupart_t *part, cpu_t *bound, cpu_t *avoid, uint16_t flags)
1548 1551 {
1549 1552 cpu_t *c, *start = (part != NULL) ? part->cp_cpulist : CPU;
1550 1553 cpu_t *online = NULL;
1551 1554 uintptr_t offset;
1552 1555
1553 1556 CYC_PTRACE("pick-cpu", part, bound);
1554 1557
1555 1558 ASSERT(!(flags & CYF_CPU_BOUND) || bound != NULL);
1556 1559 ASSERT(!(flags & CYF_PART_BOUND) || part != NULL);
1557 1560
1558 1561 /*
1559 1562 * If we're bound to our CPU, there isn't much choice involved. We
1560 1563 * need to check that the CPU passed as bound is in the cpupart, and
1561 1564 * that the CPU that we're binding to has been configured.
1562 1565 */
1563 1566 if (flags & CYF_CPU_BOUND) {
1564 1567 CYC_PTRACE("pick-cpu-bound", bound, avoid);
1565 1568
1566 1569 if ((flags & CYF_PART_BOUND) && bound->cpu_part != part)
1567 1570 panic("cyclic_pick_cpu: "
1568 1571 "CPU binding contradicts partition binding");
1569 1572
1570 1573 if (bound == avoid)
1571 1574 return (NULL);
1572 1575
1573 1576 if (bound->cpu_cyclic == NULL)
1574 1577 panic("cyclic_pick_cpu: "
1575 1578 "attempt to bind to non-configured CPU");
1576 1579
1577 1580 return (bound->cpu_cyclic);
1578 1581 }
1579 1582
1580 1583 if (flags & CYF_PART_BOUND) {
1581 1584 CYC_PTRACE("pick-part-bound", bound, avoid);
1582 1585 offset = offsetof(cpu_t, cpu_next_part);
1583 1586 } else {
1584 1587 offset = offsetof(cpu_t, cpu_next_onln);
1585 1588 }
1586 1589
1587 1590 c = start;
1588 1591 do {
1589 1592 if (c->cpu_cyclic == NULL)
1590 1593 continue;
1591 1594
1592 1595 if (c->cpu_cyclic->cyp_state == CYS_OFFLINE)
1593 1596 continue;
1594 1597
1595 1598 if (c == avoid)
1596 1599 continue;
1597 1600
1598 1601 if (c->cpu_flags & CPU_ENABLE)
1599 1602 goto found;
1600 1603
1601 1604 if (online == NULL)
1602 1605 online = c;
1603 1606 } while ((c = *(cpu_t **)((uintptr_t)c + offset)) != start);
1604 1607
1605 1608 /*
1606 1609 * If we're here, we're in one of two situations:
1607 1610 *
1608 1611 * (a) We have a partition-bound cyclic, and there is no CPU in
1609 1612 * our partition which is CPU_ENABLE'd. If we saw another
1610 1613 * non-CYS_OFFLINE CPU in our partition, we'll go with it.
1611 1614 * If not, the avoid CPU must be the only non-CYS_OFFLINE
1612 1615 * CPU in the partition; we're forced to return NULL.
1613 1616 *
1614 1617 * (b) We have a partition-unbound cyclic, in which case there
1615 1618 * must only be one CPU CPU_ENABLE'd, and it must be the one
1616 1619 * we're trying to avoid. If cyclic_juggle()/cyclic_offline()
1617 1620 * are called appropriately, this generally shouldn't happen
1618 1621 * (the offline should fail before getting to this code).
1619 1622 * At any rate: we can't avoid the avoid CPU, so we return
1620 1623 * NULL.
1621 1624 */
1622 1625 if (!(flags & CYF_PART_BOUND)) {
1623 1626 ASSERT(avoid->cpu_flags & CPU_ENABLE);
1624 1627 return (NULL);
1625 1628 }
1626 1629
1627 1630 CYC_PTRACE("pick-no-intr", part, avoid);
1628 1631
1629 1632 if ((c = online) != NULL)
1630 1633 goto found;
1631 1634
1632 1635 CYC_PTRACE("pick-fail", part, avoid);
1633 1636 ASSERT(avoid->cpu_part == start->cpu_part);
1634 1637 return (NULL);
1635 1638
1636 1639 found:
1637 1640 CYC_PTRACE("pick-cpu-found", c, avoid);
1638 1641 ASSERT(c != avoid);
1639 1642 ASSERT(c->cpu_cyclic != NULL);
1640 1643
1641 1644 return (c->cpu_cyclic);
1642 1645 }
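
cyclic_pick_cpu() walks a circular singly-linked list generically: the same loop follows either cpu_next_part or cpu_next_onln by adding a member offset to the structure pointer. A standalone sketch of the idiom, using a hypothetical node_t rather than cpu_t:

#include <stddef.h>
#include <stdint.h>

typedef struct node {
	struct node *n_next_all;	/* circular list of all nodes */
	struct node *n_next_live;	/* circular list of live nodes */
	int n_val;
} node_t;

/*
 * Walk whichever circular list "offset" selects, starting (and
 * terminating) at start; return the first node whose value matches.
 */
static node_t *
list_find(node_t *start, int val, size_t offset)
{
	node_t *n = start;

	do {
		if (n->n_val == val)
			return (n);
	} while ((n = *(node_t **)((uintptr_t)n + offset)) != start);

	return (NULL);
}

A call like list_find(head, 7, offsetof(node_t, n_next_live)) traverses the live list; passing offsetof(node_t, n_next_all) reuses the identical loop for the other list, just as the offsetof() selection above reuses one loop for both CPU lists.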
1643 1646
1644 1647 static void
1645 1648 cyclic_add_xcall(cyc_xcallarg_t *arg)
1646 1649 {
1647 1650 cyc_cpu_t *cpu = arg->cyx_cpu;
1648 1651 cyc_handler_t *hdlr = arg->cyx_hdlr;
1649 1652 cyc_time_t *when = arg->cyx_when;
1650 1653 cyc_backend_t *be = cpu->cyp_backend;
1651 1654 cyc_index_t ndx, nelems;
1652 1655 cyc_cookie_t cookie;
1653 1656 cyb_arg_t bar = be->cyb_arg;
1654 1657 cyclic_t *cyclic;
1655 1658
1656 1659 ASSERT(cpu->cyp_nelems < cpu->cyp_size);
1657 1660
1658 1661 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1659 1662
1660 1663 CYC_TRACE(cpu, CY_HIGH_LEVEL,
1661 1664 "add-xcall", when->cyt_when, when->cyt_interval);
1662 1665
1663 1666 nelems = cpu->cyp_nelems++;
1664 1667
1665 1668 if (nelems == 0) {
1666 1669 /*
1667 1670 * If this is the first element, we need to enable the
1668 1671 * backend on this CPU.
1669 1672 */
1670 1673 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "enabled");
1671 1674 be->cyb_enable(bar);
1672 1675 }
1673 1676
1674 1677 ndx = cpu->cyp_heap[nelems];
1675 1678 cyclic = &cpu->cyp_cyclics[ndx];
1676 1679
1677 1680 ASSERT(cyclic->cy_flags == CYF_FREE);
1678 1681 cyclic->cy_interval = when->cyt_interval;
1679 1682
1680 1683 if (when->cyt_when == 0) {
1681 1684 /*
1682 1685 * If a start time hasn't been explicitly specified, we'll
1683 1686 * start on the next interval boundary.
1684 1687 */
1685 1688 cyclic->cy_expire = (gethrtime() / cyclic->cy_interval + 1) *
1686 1689 cyclic->cy_interval;
1687 1690 } else {
1688 1691 cyclic->cy_expire = when->cyt_when;
1689 1692 }
1690 1693
1691 1694 cyclic->cy_handler = hdlr->cyh_func;
1692 1695 cyclic->cy_arg = hdlr->cyh_arg;
1693 1696 cyclic->cy_level = hdlr->cyh_level;
1694 1697 cyclic->cy_flags = arg->cyx_flags;
1695 1698
1696 1699 if (cyclic_upheap(cpu, nelems)) {
1697 1700 hrtime_t exp = cyclic->cy_expire;
1698 1701
1699 1702 CYC_TRACE(cpu, CY_HIGH_LEVEL, "add-reprog", cyclic, exp);
1700 1703
1701 1704 /*
1702 1705 * If our upheap propagated to the root, we need to
1703 1706 * reprogram the interrupt source.
1704 1707 */
1705 1708 be->cyb_reprogram(bar, exp);
1706 1709 }
1707 1710 be->cyb_restore_level(bar, cookie);
1708 1711
1709 1712 arg->cyx_ndx = ndx;
1710 1713 }
1711 1714
1712 1715 static cyc_index_t
1713 1716 cyclic_add_here(cyc_cpu_t *cpu, cyc_handler_t *hdlr,
1714 1717 cyc_time_t *when, uint16_t flags)
1715 1718 {
1716 1719 cyc_backend_t *be = cpu->cyp_backend;
1717 1720 cyb_arg_t bar = be->cyb_arg;
1718 1721 cyc_xcallarg_t arg;
1719 1722
1720 1723 CYC_PTRACE("add-cpu", cpu, hdlr->cyh_func);
1721 1724 ASSERT(MUTEX_HELD(&cpu_lock));
1722 1725 ASSERT(cpu->cyp_state == CYS_ONLINE);
1723 1726 ASSERT(!(cpu->cyp_cpu->cpu_flags & CPU_OFFLINE));
1724 1727 ASSERT(when->cyt_when >= 0 && when->cyt_interval > 0);
1725 1728
1726 1729 if (cpu->cyp_nelems == cpu->cyp_size) {
1727 1730 /*
1728 1731 * This is expensive; it will cross call onto the other
1729 1732 * CPU to perform the expansion.
1730 1733 */
1731 1734 cyclic_expand(cpu);
1732 1735 ASSERT(cpu->cyp_nelems < cpu->cyp_size);
1733 1736 }
1734 1737
1735 1738 /*
1736 1739 * By now, we know that we're going to be able to successfully
1737 1740 * perform the add. Now cross call over to the CPU of interest to
1738 1741 * actually add our cyclic.
1739 1742 */
1740 1743 arg.cyx_cpu = cpu;
1741 1744 arg.cyx_hdlr = hdlr;
1742 1745 arg.cyx_when = when;
1743 1746 arg.cyx_flags = flags;
1744 1747
1745 1748 be->cyb_xcall(bar, cpu->cyp_cpu, (cyc_func_t)cyclic_add_xcall, &arg);
1746 1749
1747 1750 CYC_PTRACE("add-cpu-done", cpu, arg.cyx_ndx);
1748 1751
1749 1752 return (arg.cyx_ndx);
1750 1753 }
1751 1754
1752 1755 static void
1753 1756 cyclic_remove_xcall(cyc_xcallarg_t *arg)
1754 1757 {
1755 1758 cyc_cpu_t *cpu = arg->cyx_cpu;
1756 1759 cyc_backend_t *be = cpu->cyp_backend;
1757 1760 cyb_arg_t bar = be->cyb_arg;
1758 1761 cyc_cookie_t cookie;
1759 1762 cyc_index_t ndx = arg->cyx_ndx, nelems, i;
1760 1763 cyc_index_t *heap, last;
1761 1764 cyclic_t *cyclic;
1762 1765 #ifdef DEBUG
1763 1766 cyc_index_t root;
1764 1767 #endif
1765 1768
1766 1769 ASSERT(cpu->cyp_state == CYS_REMOVING);
1767 1770
1768 1771 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1769 1772
1770 1773 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "remove-xcall", ndx);
1771 1774
1772 1775 heap = cpu->cyp_heap;
1773 1776 nelems = cpu->cyp_nelems;
1774 1777 ASSERT(nelems > 0);
1775 1778 cyclic = &cpu->cyp_cyclics[ndx];
1776 1779
1777 1780 /*
1778 1781 * Grab the current expiration time. If this cyclic is being
1779 1782 * removed as part of a juggling operation, the expiration time
1780 1783 * will be used when the cyclic is added to the new CPU.
1781 1784 */
1782 1785 if (arg->cyx_when != NULL) {
1783 1786 arg->cyx_when->cyt_when = cyclic->cy_expire;
1784 1787 arg->cyx_when->cyt_interval = cyclic->cy_interval;
1785 1788 }
1786 1789
1787 1790 if (cyclic->cy_pend != 0) {
1788 1791 /*
1789 1792 * The pend is non-zero; this cyclic is currently being
1790 1793 * executed (or will be executed shortly). If the caller
1791 1794 * refuses to wait, we must return (doing nothing). Otherwise,
1792 1795 	 * we will stash the pend value in this CPU's rpend, and
1793 1796 * then zero it out. The softint in the pend loop will see
1794 1797 * that we have zeroed out pend, and will call the cyclic
1795 1798 * handler rpend times. The caller will wait until the
1796 1799 * softint has completed calling the cyclic handler.
1797 1800 */
1798 1801 if (arg->cyx_wait == CY_NOWAIT) {
1799 1802 arg->cyx_wait = CY_WAIT;
1800 1803 goto out;
1801 1804 }
1802 1805
1803 1806 ASSERT(cyclic->cy_level != CY_HIGH_LEVEL);
1804 1807 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "remove-pend", cyclic->cy_pend);
1805 1808 cpu->cyp_rpend = cyclic->cy_pend;
1806 1809 cyclic->cy_pend = 0;
1807 1810 }
1808 1811
1809 1812 /*
1810 1813 * Now set the flags to CYF_FREE. We don't need a membar_enter()
1811 1814 * between zeroing pend and setting the flags because we're at
1812 1815 * CY_HIGH_LEVEL (that is, the zeroing of pend and the setting
1813 1816 * of cy_flags appear atomic to softints).
1814 1817 */
1815 1818 cyclic->cy_flags = CYF_FREE;
1816 1819
1817 1820 for (i = 0; i < nelems; i++) {
1818 1821 if (heap[i] == ndx)
1819 1822 break;
1820 1823 }
1821 1824
1822 1825 if (i == nelems)
1823 1826 panic("attempt to remove non-existent cyclic");
1824 1827
1825 1828 cpu->cyp_nelems = --nelems;
1826 1829
1827 1830 if (nelems == 0) {
1828 1831 /*
1829 1832 * If we just removed the last element, then we need to
1830 1833 * disable the backend on this CPU.
1831 1834 */
1832 1835 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "disabled");
1833 1836 be->cyb_disable(bar);
1834 1837 }
1835 1838
1836 1839 if (i == nelems) {
1837 1840 /*
1838 1841 * If we just removed the last element of the heap, then
1839 1842 * we don't have to downheap.
1840 1843 */
1841 1844 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "remove-bottom");
1842 1845 goto out;
1843 1846 }
1844 1847
1845 1848 #ifdef DEBUG
1846 1849 root = heap[0];
1847 1850 #endif
1848 1851
1849 1852 /*
1850 1853 * Swap the last element of the heap with the one we want to
1851 1854 * remove, and downheap (this has the implicit effect of putting
1852 1855 * the newly freed element on the free list).
1853 1856 */
1854 1857 heap[i] = (last = heap[nelems]);
1855 1858 heap[nelems] = ndx;
1856 1859
1857 1860 if (i == 0) {
1858 1861 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "remove-root");
1859 1862 cyclic_downheap(cpu, 0);
1860 1863 } else {
1861 1864 if (cyclic_upheap(cpu, i) == 0) {
1862 1865 /*
1863 1866 * The upheap didn't propagate to the root; if it
1864 1867 * didn't propagate at all, we need to downheap.
1865 1868 */
1866 1869 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "remove-no-root");
1867 1870 if (heap[i] == last) {
1868 1871 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "remove-no-up");
1869 1872 cyclic_downheap(cpu, i);
1870 1873 }
1871 1874 ASSERT(heap[0] == root);
1872 1875 goto out;
1873 1876 }
1874 1877 }
1875 1878
1876 1879 /*
1877 1880 * We're here because we changed the root; we need to reprogram
1878 1881 * the clock source.
1879 1882 */
1880 1883 cyclic = &cpu->cyp_cyclics[heap[0]];
1881 1884
1882 1885 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "remove-reprog");
1883 1886
1884 1887 ASSERT(nelems != 0);
1885 1888 be->cyb_reprogram(bar, cyclic->cy_expire);
1886 1889 out:
1887 1890 be->cyb_restore_level(bar, cookie);
1888 1891 }
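
The swap-and-reheapify above is the textbook delete-at-index operation on a binary min-heap: the last element fills the hole, and the moved value then sifts up or down to restore the invariant. A self-contained sketch over a plain array of hrtime_t expiration times (the real code instead keeps a heap of indices into a separate cyclics array):

#include <sys/time.h>

/*
 * Delete heap[i] from a min-heap of n elements; returns the new size.
 */
static int
heap_delete(hrtime_t *heap, int n, int i)
{
	hrtime_t v = heap[--n];		/* last element fills the hole */
	int c;

	if (i == n)
		return (n);		/* deleted the last slot; done */

	while (i > 0 && v < heap[(i - 1) / 2]) {	/* sift up */
		heap[i] = heap[(i - 1) / 2];
		i = (i - 1) / 2;
	}

	while ((c = 2 * i + 1) < n) {			/* sift down */
		if (c + 1 < n && heap[c + 1] < heap[c])
			c++;		/* descend toward the smaller child */
		if (heap[c] >= v)
			break;
		heap[i] = heap[c];
		i = c;
	}

	heap[i] = v;
	return (n);
}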
1889 1892
1890 1893 static int
1891 1894 cyclic_remove_here(cyc_cpu_t *cpu, cyc_index_t ndx, cyc_time_t *when, int wait)
1892 1895 {
1893 1896 cyc_backend_t *be = cpu->cyp_backend;
1894 1897 cyc_xcallarg_t arg;
1895 1898 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
1896 1899 cyc_level_t level = cyclic->cy_level;
1897 1900
1898 1901 ASSERT(MUTEX_HELD(&cpu_lock));
1899 1902 ASSERT(cpu->cyp_rpend == 0);
1900 1903 ASSERT(wait == CY_WAIT || wait == CY_NOWAIT);
1901 1904
1902 1905 arg.cyx_ndx = ndx;
1903 1906 arg.cyx_cpu = cpu;
1904 1907 arg.cyx_when = when;
1905 1908 arg.cyx_wait = wait;
1906 1909
1907 1910 ASSERT(cpu->cyp_state == CYS_ONLINE);
1908 1911 cpu->cyp_state = CYS_REMOVING;
1909 1912
1910 1913 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
1911 1914 (cyc_func_t)cyclic_remove_xcall, &arg);
1912 1915
1913 1916 /*
1914 1917 * If the cyclic we removed wasn't at CY_HIGH_LEVEL, then we need to
1915 1918 * check the cyp_rpend. If it's non-zero, then we need to wait here
1916 1919 * for all pending cyclic handlers to run.
1917 1920 */
1918 1921 ASSERT(!(level == CY_HIGH_LEVEL && cpu->cyp_rpend != 0));
1919 1922 ASSERT(!(wait == CY_NOWAIT && cpu->cyp_rpend != 0));
1920 1923 ASSERT(!(arg.cyx_wait == CY_NOWAIT && cpu->cyp_rpend != 0));
1921 1924
1922 1925 if (wait != arg.cyx_wait) {
1923 1926 /*
1924 1927 * We are being told that we must wait if we want to
1925 1928 * remove this cyclic; put the CPU back in the CYS_ONLINE
1926 1929 * state and return failure.
1927 1930 */
1928 1931 ASSERT(wait == CY_NOWAIT && arg.cyx_wait == CY_WAIT);
1929 1932 ASSERT(cpu->cyp_state == CYS_REMOVING);
1930 1933 cpu->cyp_state = CYS_ONLINE;
1931 1934
1932 1935 return (0);
1933 1936 }
1934 1937
1935 1938 if (cpu->cyp_rpend != 0)
1936 1939 sema_p(&cpu->cyp_modify_wait);
1937 1940
1938 1941 ASSERT(cpu->cyp_state == CYS_REMOVING);
1939 1942
1940 1943 cpu->cyp_rpend = 0;
1941 1944 cpu->cyp_state = CYS_ONLINE;
1942 1945
1943 1946 return (1);
1944 1947 }
1945 1948
1946 1949 /*
1947 1950 * If cyclic_reprogram() is called on the same CPU as the cyclic's CPU, then
1948 1951 * it calls this function directly. Else, it invokes this function through
1949 1952 * an X-call to the cyclic's CPU.
1950 1953 */
1951 1954 static void
1952 1955 cyclic_reprogram_cyclic(cyc_cpu_t *cpu, cyc_index_t ndx, hrtime_t expire)
1953 1956 {
1954 1957 cyc_backend_t *be = cpu->cyp_backend;
1955 1958 cyb_arg_t bar = be->cyb_arg;
1956 1959 cyc_cookie_t cookie;
1957 1960 cyc_index_t nelems, i;
1958 1961 cyc_index_t *heap;
1959 1962 cyclic_t *cyclic;
1960 1963 hrtime_t oexpire;
1961 1964 int reprog;
1962 1965
1963 1966 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
1964 1967
1965 1968 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "reprog-xcall", ndx);
1966 1969
1967 1970 nelems = cpu->cyp_nelems;
1968 1971 ASSERT(nelems > 0);
1969 1972 heap = cpu->cyp_heap;
1970 1973
1971 1974 /*
1972 1975 * Reprogrammed cyclics are typically one-shot ones that get
1973 1976 * set to infinity on every expiration. We shorten the search by
1974 1977 * searching from the bottom of the heap to the top instead of the
1975 1978 * other way around.
1976 1979 */
1977 1980 for (i = nelems - 1; i >= 0; i--) {
1978 1981 if (heap[i] == ndx)
1979 1982 break;
1980 1983 }
1981 1984 if (i < 0)
1982 1985 panic("attempt to reprogram non-existent cyclic");
1983 1986
1984 1987 cyclic = &cpu->cyp_cyclics[ndx];
1985 1988 oexpire = cyclic->cy_expire;
1986 1989 cyclic->cy_expire = expire;
1987 1990
1988 1991 reprog = (i == 0);
1989 1992 if (expire > oexpire) {
1990 1993 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "reprog-down", i);
1991 1994 cyclic_downheap(cpu, i);
1992 1995 } else if (i > 0) {
1993 1996 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "reprog-up", i);
1994 1997 reprog = cyclic_upheap(cpu, i);
1995 1998 }
1996 1999
1997 2000 if (reprog && (cpu->cyp_state != CYS_SUSPENDED)) {
1998 2001 /*
1999 2002 * The root changed. Reprogram the clock source.
2000 2003 */
2001 2004 CYC_TRACE0(cpu, CY_HIGH_LEVEL, "reprog-root");
2002 2005 cyclic = &cpu->cyp_cyclics[heap[0]];
2003 2006 be->cyb_reprogram(bar, cyclic->cy_expire);
2004 2007 }
2005 2008
2006 2009 be->cyb_restore_level(bar, cookie);
2007 2010 }
2008 2011
2009 2012 static void
2010 2013 cyclic_reprogram_xcall(cyc_xcallarg_t *arg)
2011 2014 {
2012 2015 cyclic_reprogram_cyclic(arg->cyx_cpu, arg->cyx_ndx,
2013 2016 arg->cyx_when->cyt_when);
2014 2017 }
2015 2018
2016 2019 static void
2017 2020 cyclic_reprogram_here(cyc_cpu_t *cpu, cyc_index_t ndx, hrtime_t expiration)
2018 2021 {
2019 2022 cyc_backend_t *be = cpu->cyp_backend;
2020 2023 cyc_xcallarg_t arg;
2021 2024 cyc_time_t when;
2022 2025
2023 2026 ASSERT(expiration > 0);
2024 2027
2025 2028 arg.cyx_ndx = ndx;
2026 2029 arg.cyx_cpu = cpu;
2027 2030 arg.cyx_when = &when;
2028 2031 when.cyt_when = expiration;
2029 2032
2030 2033 be->cyb_xcall(be->cyb_arg, cpu->cyp_cpu,
2031 2034 (cyc_func_t)cyclic_reprogram_xcall, &arg);
2032 2035 }
2033 2036
2034 2037 /*
2035 2038 * cyclic_juggle_one_to() should only be called when the source cyclic
2036 2039 * can be juggled and the destination CPU is known to be able to accept
2037 2040 * it.
2038 2041 */
2039 2042 static void
2040 2043 cyclic_juggle_one_to(cyc_id_t *idp, cyc_cpu_t *dest)
2041 2044 {
2042 2045 cyc_cpu_t *src = idp->cyi_cpu;
2043 2046 cyc_index_t ndx = idp->cyi_ndx;
2044 2047 cyc_time_t when;
2045 2048 cyc_handler_t hdlr;
2046 2049 cyclic_t *cyclic;
2047 2050 uint16_t flags;
2048 2051 hrtime_t delay;
2049 2052
2050 2053 ASSERT(MUTEX_HELD(&cpu_lock));
2051 2054 ASSERT(src != NULL && idp->cyi_omni_list == NULL);
2052 2055 ASSERT(!(dest->cyp_cpu->cpu_flags & (CPU_QUIESCED | CPU_OFFLINE)));
2053 2056 CYC_PTRACE("juggle-one-to", idp, dest);
2054 2057
2055 2058 cyclic = &src->cyp_cyclics[ndx];
2056 2059
2057 2060 flags = cyclic->cy_flags;
2058 2061 ASSERT(!(flags & CYF_CPU_BOUND) && !(flags & CYF_FREE));
2059 2062
2060 2063 hdlr.cyh_func = cyclic->cy_handler;
2061 2064 hdlr.cyh_level = cyclic->cy_level;
2062 2065 hdlr.cyh_arg = cyclic->cy_arg;
2063 2066
2064 2067 /*
2065 2068 * Before we begin the juggling process, see if the destination
2066 2069 * CPU requires an expansion. If it does, we'll perform the
2067 2070 * expansion before removing the cyclic. This is to prevent us
2068 2071 * from blocking while a system-critical cyclic (notably, the clock
2069 2072 * cyclic) isn't on a CPU.
2070 2073 */
2071 2074 if (dest->cyp_nelems == dest->cyp_size) {
2072 2075 CYC_PTRACE("remove-expand", idp, dest);
2073 2076 cyclic_expand(dest);
2074 2077 ASSERT(dest->cyp_nelems < dest->cyp_size);
2075 2078 }
2076 2079
2077 2080 /*
2078 2081 * Prevent a reprogram of this cyclic while we are relocating it.
2079 2082 * Otherwise, cyclic_reprogram_here() will end up sending an X-call
2080 2083 * to the wrong CPU.
2081 2084 */
2082 2085 rw_enter(&idp->cyi_lock, RW_WRITER);
2083 2086
2084 2087 /*
2085 2088 * Remove the cyclic from the source. As mentioned above, we cannot
2086 2089 * block during this operation; if we cannot remove the cyclic
2087 2090 * without waiting, we spin for a time shorter than the interval, and
2088 2091 * reattempt the (non-blocking) removal. If we continue to fail,
2089 2092 * we will exponentially back off (up to half of the interval).
2090 2093 * Note that the removal will ultimately succeed -- even if the
2091 2094 * cyclic handler is blocked on a resource held by a thread which we
2092 2095 * have preempted, priority inheritance assures that the preempted
2093 2096 * thread will preempt us and continue to progress.
2094 2097 */
2095 2098 for (delay = NANOSEC / MICROSEC; ; delay <<= 1) {
2096 2099 /*
2097 2100 * Before we begin this operation, disable kernel preemption.
2098 2101 */
2099 2102 kpreempt_disable();
2100 2103 if (cyclic_remove_here(src, ndx, &when, CY_NOWAIT))
2101 2104 break;
2102 2105
2103 2106 /*
2104 2107 * The operation failed; enable kernel preemption while
2105 2108 * spinning.
2106 2109 */
2107 2110 kpreempt_enable();
2108 2111
2109 2112 CYC_PTRACE("remove-retry", idp, src);
2110 2113
2111 2114 if (delay > (cyclic->cy_interval >> 1))
2112 2115 delay = cyclic->cy_interval >> 1;
2113 2116
2114 2117 /*
2115 2118 * Drop the RW lock to avoid a deadlock with the cyclic
2116 2119 		 * handler (because it can potentially call cyclic_reprogram()).
2117 2120 */
2118 2121 rw_exit(&idp->cyi_lock);
2119 2122 drv_usecwait((clock_t)(delay / (NANOSEC / MICROSEC)));
2120 2123 rw_enter(&idp->cyi_lock, RW_WRITER);
2121 2124 }
2122 2125
2123 2126 /*
2124 2127 * Now add the cyclic to the destination. This won't block; we
2125 2128 * performed any necessary (blocking) expansion of the destination
2126 2129 * CPU before removing the cyclic from the source CPU.
2127 2130 */
2128 2131 idp->cyi_ndx = cyclic_add_here(dest, &hdlr, &when, flags);
2129 2132 idp->cyi_cpu = dest;
2130 2133 kpreempt_enable();
2131 2134
2132 2135 /*
2133 2136 * Now that we have successfully relocated the cyclic, allow
2134 2137 * it to be reprogrammed.
2135 2138 */
2136 2139 rw_exit(&idp->cyi_lock);
2137 2140 }
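
The removal loop above never waits for longer than half the cyclic's interval. A hedged sketch of just the backoff logic, where try_remove() stands in for the CY_NOWAIT removal attempt and is not a real interface; the kernel-preemption and cyi_lock handling of the real loop is omitted:

#include <sys/time.h>
#include <sys/sunddi.h>

extern int try_remove(void);	/* hypothetical non-blocking attempt */

static void
remove_with_backoff(hrtime_t interval)
{
	hrtime_t delay;

	/* Start at 1us (expressed in ns); double after each failure. */
	for (delay = NANOSEC / MICROSEC; !try_remove(); delay <<= 1) {
		if (delay > (interval >> 1))
			delay = interval >> 1;	/* cap at half the interval */
		drv_usecwait((clock_t)(delay / (NANOSEC / MICROSEC)));
	}
}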
2138 2141
2139 2142 static int
2140 2143 cyclic_juggle_one(cyc_id_t *idp)
2141 2144 {
2142 2145 cyc_index_t ndx = idp->cyi_ndx;
2143 2146 cyc_cpu_t *cpu = idp->cyi_cpu, *dest;
2144 2147 cyclic_t *cyclic = &cpu->cyp_cyclics[ndx];
2145 2148 cpu_t *c = cpu->cyp_cpu;
2146 2149 cpupart_t *part = c->cpu_part;
2147 2150
2148 2151 CYC_PTRACE("juggle-one", idp, cpu);
2149 2152 ASSERT(MUTEX_HELD(&cpu_lock));
2150 2153 ASSERT(!(c->cpu_flags & CPU_OFFLINE));
2151 2154 ASSERT(cpu->cyp_state == CYS_ONLINE);
2152 2155 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2153 2156
2154 2157 if ((dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags)) == NULL) {
2155 2158 /*
2156 2159 * Bad news: this cyclic can't be juggled.
2157 2160 */
2158 2161 		CYC_PTRACE("juggle-fail", idp, cpu);
2159 2162 return (0);
2160 2163 }
2161 2164
2162 2165 cyclic_juggle_one_to(idp, dest);
2163 2166
2164 2167 return (1);
2165 2168 }
2166 2169
2167 2170 static void
2168 2171 cyclic_unbind_cpu(cyclic_id_t id)
2169 2172 {
2170 2173 cyc_id_t *idp = (cyc_id_t *)id;
2171 2174 cyc_cpu_t *cpu = idp->cyi_cpu;
2172 2175 cpu_t *c = cpu->cyp_cpu;
2173 2176 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2174 2177
2175 2178 CYC_PTRACE("unbind-cpu", id, cpu);
2176 2179 ASSERT(MUTEX_HELD(&cpu_lock));
2177 2180 ASSERT(cpu->cyp_state == CYS_ONLINE);
2178 2181 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2179 2182 ASSERT(cyclic->cy_flags & CYF_CPU_BOUND);
2180 2183
2181 2184 cyclic->cy_flags &= ~CYF_CPU_BOUND;
2182 2185
2183 2186 /*
2184 2187 	 * If we were bound to a CPU which has interrupts disabled, we need
2185 2188 * to juggle away. This can only fail if we are bound to a
2186 2189 * processor set, and if every CPU in the processor set has
2187 2190 * interrupts disabled.
2188 2191 */
2189 2192 if (!(c->cpu_flags & CPU_ENABLE)) {
2190 2193 int res = cyclic_juggle_one(idp);
2191 2194
2192 2195 ASSERT((res && idp->cyi_cpu != cpu) ||
2193 2196 (!res && (cyclic->cy_flags & CYF_PART_BOUND)));
2194 2197 }
2195 2198 }
2196 2199
2197 2200 static void
2198 2201 cyclic_bind_cpu(cyclic_id_t id, cpu_t *d)
2199 2202 {
2200 2203 cyc_id_t *idp = (cyc_id_t *)id;
2201 2204 cyc_cpu_t *dest = d->cpu_cyclic, *cpu = idp->cyi_cpu;
2202 2205 cpu_t *c = cpu->cyp_cpu;
2203 2206 cyclic_t *cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
2204 2207 cpupart_t *part = c->cpu_part;
2205 2208
2206 2209 CYC_PTRACE("bind-cpu", id, dest);
2207 2210 ASSERT(MUTEX_HELD(&cpu_lock));
2208 2211 ASSERT(!(d->cpu_flags & CPU_OFFLINE));
2209 2212 ASSERT(!(c->cpu_flags & CPU_OFFLINE));
2210 2213 ASSERT(cpu->cyp_state == CYS_ONLINE);
2211 2214 ASSERT(dest != NULL);
2212 2215 ASSERT(dest->cyp_state == CYS_ONLINE);
2213 2216 ASSERT(!(cyclic->cy_flags & CYF_FREE));
2214 2217 ASSERT(!(cyclic->cy_flags & CYF_CPU_BOUND));
2215 2218
2216 2219 dest = cyclic_pick_cpu(part, d, NULL, cyclic->cy_flags | CYF_CPU_BOUND);
2217 2220
2218 2221 if (dest != cpu) {
2219 2222 cyclic_juggle_one_to(idp, dest);
2220 2223 cyclic = &dest->cyp_cyclics[idp->cyi_ndx];
2221 2224 }
2222 2225
2223 2226 cyclic->cy_flags |= CYF_CPU_BOUND;
2224 2227 }
2225 2228
2226 2229 static void
2227 2230 cyclic_unbind_cpupart(cyclic_id_t id)
2228 2231 {
2229 2232 cyc_id_t *idp = (cyc_id_t *)id;
2230 2233 cyc_cpu_t *cpu = idp->cyi_cpu;
2231 2234 cpu_t *c = cpu->cyp_cpu;
2232 2235 cyclic_t *cyc = &cpu->cyp_cyclics[idp->cyi_ndx];
2233 2236
2234 2237 CYC_PTRACE("unbind-part", idp, c->cpu_part);
2235 2238 ASSERT(MUTEX_HELD(&cpu_lock));
2236 2239 ASSERT(cpu->cyp_state == CYS_ONLINE);
2237 2240 ASSERT(!(cyc->cy_flags & CYF_FREE));
2238 2241 ASSERT(cyc->cy_flags & CYF_PART_BOUND);
2239 2242
2240 2243 cyc->cy_flags &= ~CYF_PART_BOUND;
2241 2244
2242 2245 /*
2243 2246 * If we're on a CPU which has interrupts disabled (and if this cyclic
2244 2247 * isn't bound to the CPU), we need to juggle away.
2245 2248 */
2246 2249 if (!(c->cpu_flags & CPU_ENABLE) && !(cyc->cy_flags & CYF_CPU_BOUND)) {
2247 2250 int res = cyclic_juggle_one(idp);
2248 2251
2249 2252 ASSERT(res && idp->cyi_cpu != cpu);
2250 2253 }
2251 2254 }
2252 2255
2253 2256 static void
2254 2257 cyclic_bind_cpupart(cyclic_id_t id, cpupart_t *part)
2255 2258 {
2256 2259 cyc_id_t *idp = (cyc_id_t *)id;
2257 2260 cyc_cpu_t *cpu = idp->cyi_cpu, *dest;
2258 2261 cpu_t *c = cpu->cyp_cpu;
2259 2262 cyclic_t *cyc = &cpu->cyp_cyclics[idp->cyi_ndx];
2260 2263
2261 2264 CYC_PTRACE("bind-part", idp, part);
2262 2265 ASSERT(MUTEX_HELD(&cpu_lock));
2263 2266 ASSERT(!(c->cpu_flags & CPU_OFFLINE));
2264 2267 ASSERT(cpu->cyp_state == CYS_ONLINE);
2265 2268 ASSERT(!(cyc->cy_flags & CYF_FREE));
2266 2269 ASSERT(!(cyc->cy_flags & CYF_PART_BOUND));
2267 2270 ASSERT(part->cp_ncpus > 0);
2268 2271
2269 2272 dest = cyclic_pick_cpu(part, c, NULL, cyc->cy_flags | CYF_PART_BOUND);
2270 2273
2271 2274 if (dest != cpu) {
2272 2275 cyclic_juggle_one_to(idp, dest);
2273 2276 cyc = &dest->cyp_cyclics[idp->cyi_ndx];
2274 2277 }
2275 2278
2276 2279 cyc->cy_flags |= CYF_PART_BOUND;
2277 2280 }
2278 2281
2279 2282 static void
2280 2283 cyclic_configure(cpu_t *c)
2281 2284 {
2282 2285 cyc_cpu_t *cpu = kmem_zalloc(sizeof (cyc_cpu_t), KM_SLEEP);
2283 2286 cyc_backend_t *nbe = kmem_zalloc(sizeof (cyc_backend_t), KM_SLEEP);
2284 2287 int i;
2285 2288
2286 2289 CYC_PTRACE1("configure", cpu);
2287 2290 ASSERT(MUTEX_HELD(&cpu_lock));
2288 2291
2289 2292 if (cyclic_id_cache == NULL)
2290 2293 cyclic_id_cache = kmem_cache_create("cyclic_id_cache",
2291 2294 sizeof (cyc_id_t), 0, NULL, NULL, NULL, NULL, NULL, 0);
2292 2295
2293 2296 cpu->cyp_cpu = c;
2294 2297
2295 2298 sema_init(&cpu->cyp_modify_wait, 0, NULL, SEMA_DEFAULT, NULL);
2296 2299
2297 2300 cpu->cyp_size = 1;
2298 2301 cpu->cyp_heap = kmem_zalloc(sizeof (cyc_index_t), KM_SLEEP);
2299 2302 cpu->cyp_cyclics = kmem_zalloc(sizeof (cyclic_t), KM_SLEEP);
2300 2303 cpu->cyp_cyclics->cy_flags = CYF_FREE;
2301 2304
2302 2305 for (i = CY_LOW_LEVEL; i < CY_LOW_LEVEL + CY_SOFT_LEVELS; i++) {
2303 2306 /*
2304 2307 * We don't need to set the sizemask; it's already zero
2305 2308 * (which is the appropriate sizemask for a size of 1).
2306 2309 */
2307 2310 cpu->cyp_softbuf[i].cys_buf[0].cypc_buf =
2308 2311 kmem_alloc(sizeof (cyc_index_t), KM_SLEEP);
2309 2312 }
2310 2313
2311 2314 cpu->cyp_state = CYS_OFFLINE;
2312 2315
2313 2316 /*
2314 2317 * Setup the backend for this CPU.
2315 2318 */
2316 2319 bcopy(&cyclic_backend, nbe, sizeof (cyc_backend_t));
2317 2320 nbe->cyb_arg = nbe->cyb_configure(c);
2318 2321 cpu->cyp_backend = nbe;
2319 2322
2320 2323 /*
2321 2324 * On platforms where stray interrupts may be taken during startup,
2322 2325 * the CPU's cpu_cyclic pointer serves as an indicator that the
2323 2326 * cyclic subsystem for this CPU is prepared to field interrupts.
2324 2327 */
2325 2328 membar_producer();
2326 2329
2327 2330 c->cpu_cyclic = cpu;
2328 2331 }
2329 2332
2330 2333 static void
2331 2334 cyclic_unconfigure(cpu_t *c)
2332 2335 {
2333 2336 cyc_cpu_t *cpu = c->cpu_cyclic;
2334 2337 cyc_backend_t *be = cpu->cyp_backend;
2335 2338 cyb_arg_t bar = be->cyb_arg;
2336 2339 int i;
2337 2340
2338 2341 CYC_PTRACE1("unconfigure", cpu);
2339 2342 ASSERT(MUTEX_HELD(&cpu_lock));
2340 2343 ASSERT(cpu->cyp_state == CYS_OFFLINE);
2341 2344 ASSERT(cpu->cyp_nelems == 0);
2342 2345
2343 2346 /*
2344 2347 * Let the backend know that the CPU is being yanked, and free up
2345 2348 * the backend structure.
2346 2349 */
2347 2350 be->cyb_unconfigure(bar);
2348 2351 kmem_free(be, sizeof (cyc_backend_t));
2349 2352 cpu->cyp_backend = NULL;
2350 2353
2351 2354 /*
2352 2355 * Free up the producer/consumer buffers at each of the soft levels.
2353 2356 */
2354 2357 for (i = CY_LOW_LEVEL; i < CY_LOW_LEVEL + CY_SOFT_LEVELS; i++) {
2355 2358 cyc_softbuf_t *softbuf = &cpu->cyp_softbuf[i];
2356 2359 uchar_t hard = softbuf->cys_hard;
2357 2360 cyc_pcbuffer_t *pc = &softbuf->cys_buf[hard];
2358 2361 size_t bufsize = sizeof (cyc_index_t) * (pc->cypc_sizemask + 1);
2359 2362
2360 2363 /*
2361 2364 * Assert that we're not in the middle of a resize operation.
2362 2365 */
2363 2366 ASSERT(hard == softbuf->cys_soft);
2364 2367 ASSERT(hard == 0 || hard == 1);
2365 2368 ASSERT(pc->cypc_buf != NULL);
2366 2369 ASSERT(softbuf->cys_buf[hard ^ 1].cypc_buf == NULL);
2367 2370
2368 2371 kmem_free(pc->cypc_buf, bufsize);
2369 2372 pc->cypc_buf = NULL;
2370 2373 }
2371 2374
2372 2375 /*
2373 2376 * Finally, clean up our remaining dynamic structures and NULL out
2374 2377 * the cpu_cyclic pointer.
2375 2378 */
2376 2379 kmem_free(cpu->cyp_cyclics, cpu->cyp_size * sizeof (cyclic_t));
2377 2380 kmem_free(cpu->cyp_heap, cpu->cyp_size * sizeof (cyc_index_t));
2378 2381 kmem_free(cpu, sizeof (cyc_cpu_t));
2379 2382
2380 2383 c->cpu_cyclic = NULL;
2381 2384 }
2382 2385
2383 2386 static int
2384 2387 cyclic_cpu_setup(cpu_setup_t what, int id)
2385 2388 {
2386 2389 /*
2387 2390 * We are guaranteed that there is still/already an entry in the
2388 2391 * cpu array for this CPU.
2389 2392 */
2390 2393 cpu_t *c = cpu[id];
2391 2394 cyc_cpu_t *cyp = c->cpu_cyclic;
2392 2395
2393 2396 ASSERT(MUTEX_HELD(&cpu_lock));
2394 2397
2395 2398 switch (what) {
2396 2399 case CPU_CONFIG:
2397 2400 ASSERT(cyp == NULL);
2398 2401 cyclic_configure(c);
2399 2402 break;
2400 2403
2401 2404 case CPU_UNCONFIG:
2402 2405 ASSERT(cyp != NULL && cyp->cyp_state == CYS_OFFLINE);
2403 2406 cyclic_unconfigure(c);
2404 2407 break;
2405 2408
2406 2409 default:
2407 2410 break;
2408 2411 }
2409 2412
2410 2413 return (0);
2411 2414 }
2412 2415
2413 2416 static void
2414 2417 cyclic_suspend_xcall(cyc_xcallarg_t *arg)
2415 2418 {
2416 2419 cyc_cpu_t *cpu = arg->cyx_cpu;
2417 2420 cyc_backend_t *be = cpu->cyp_backend;
2418 2421 cyc_cookie_t cookie;
2419 2422 cyb_arg_t bar = be->cyb_arg;
2420 2423
2421 2424 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
2422 2425
2423 2426 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "suspend-xcall", cpu->cyp_nelems);
2424 2427 ASSERT(cpu->cyp_state == CYS_ONLINE || cpu->cyp_state == CYS_OFFLINE);
2425 2428
2426 2429 /*
2427 2430 * We won't disable this CPU unless it has a non-zero number of
2428 2431 * elements (cpu_lock assures that no one else may be attempting
2429 2432 * to disable this CPU).
2430 2433 */
2431 2434 if (cpu->cyp_nelems > 0) {
2432 2435 ASSERT(cpu->cyp_state == CYS_ONLINE);
2433 2436 be->cyb_disable(bar);
2434 2437 }
2435 2438
2436 2439 if (cpu->cyp_state == CYS_ONLINE)
2437 2440 cpu->cyp_state = CYS_SUSPENDED;
2438 2441
2439 2442 be->cyb_suspend(bar);
2440 2443 be->cyb_restore_level(bar, cookie);
2441 2444 }
2442 2445
2443 2446 static void
2444 2447 cyclic_resume_xcall(cyc_xcallarg_t *arg)
2445 2448 {
2446 2449 cyc_cpu_t *cpu = arg->cyx_cpu;
2447 2450 cyc_backend_t *be = cpu->cyp_backend;
2448 2451 cyc_cookie_t cookie;
2449 2452 cyb_arg_t bar = be->cyb_arg;
2450 2453 cyc_state_t state = cpu->cyp_state;
2451 2454
2452 2455 cookie = be->cyb_set_level(bar, CY_HIGH_LEVEL);
2453 2456
2454 2457 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "resume-xcall", cpu->cyp_nelems);
2455 2458 ASSERT(state == CYS_SUSPENDED || state == CYS_OFFLINE);
2456 2459
2457 2460 be->cyb_resume(bar);
2458 2461
2459 2462 /*
2460 2463 * We won't enable this CPU unless it has a non-zero number of
2461 2464 * elements.
2462 2465 */
2463 2466 if (cpu->cyp_nelems > 0) {
2464 2467 cyclic_t *cyclic = &cpu->cyp_cyclics[cpu->cyp_heap[0]];
2465 2468 hrtime_t exp = cyclic->cy_expire;
2466 2469
2467 2470 CYC_TRACE(cpu, CY_HIGH_LEVEL, "resume-reprog", cyclic, exp);
2468 2471 ASSERT(state == CYS_SUSPENDED);
2469 2472 be->cyb_enable(bar);
2470 2473 be->cyb_reprogram(bar, exp);
2471 2474 }
2472 2475
2473 2476 if (state == CYS_SUSPENDED)
2474 2477 cpu->cyp_state = CYS_ONLINE;
2475 2478
2476 2479 CYC_TRACE1(cpu, CY_HIGH_LEVEL, "resume-done", cpu->cyp_nelems);
2477 2480 be->cyb_restore_level(bar, cookie);
2478 2481 }
2479 2482
2480 2483 static void
2481 2484 cyclic_omni_start(cyc_id_t *idp, cyc_cpu_t *cpu)
2482 2485 {
2483 2486 cyc_omni_handler_t *omni = &idp->cyi_omni_hdlr;
2484 2487 cyc_omni_cpu_t *ocpu = kmem_alloc(sizeof (cyc_omni_cpu_t), KM_SLEEP);
2485 2488 cyc_handler_t hdlr;
2486 2489 cyc_time_t when;
2487 2490
2488 2491 CYC_PTRACE("omni-start", cpu, idp);
2489 2492 ASSERT(MUTEX_HELD(&cpu_lock));
2490 2493 ASSERT(cpu->cyp_state == CYS_ONLINE);
2491 2494 ASSERT(idp->cyi_cpu == NULL);
2492 2495
2493 2496 hdlr.cyh_func = NULL;
2494 2497 hdlr.cyh_arg = NULL;
2495 2498 hdlr.cyh_level = CY_LEVELS;
2496 2499
2497 2500 when.cyt_when = 0;
2498 2501 when.cyt_interval = 0;
2499 2502
2500 2503 omni->cyo_online(omni->cyo_arg, cpu->cyp_cpu, &hdlr, &when);
2501 2504
2502 2505 ASSERT(hdlr.cyh_func != NULL);
2503 2506 ASSERT(hdlr.cyh_level < CY_LEVELS);
2504 2507 ASSERT(when.cyt_when >= 0 && when.cyt_interval > 0);
2505 2508
2506 2509 ocpu->cyo_cpu = cpu;
2507 2510 ocpu->cyo_arg = hdlr.cyh_arg;
2508 2511 ocpu->cyo_ndx = cyclic_add_here(cpu, &hdlr, &when, 0);
2509 2512 ocpu->cyo_next = idp->cyi_omni_list;
2510 2513 idp->cyi_omni_list = ocpu;
2511 2514 }
2512 2515
2513 2516 static void
2514 2517 cyclic_omni_stop(cyc_id_t *idp, cyc_cpu_t *cpu)
2515 2518 {
2516 2519 cyc_omni_handler_t *omni = &idp->cyi_omni_hdlr;
2517 2520 cyc_omni_cpu_t *ocpu = idp->cyi_omni_list, *prev = NULL;
2518 2521 clock_t delay;
2519 2522 int ret;
2520 2523
2521 2524 CYC_PTRACE("omni-stop", cpu, idp);
2522 2525 ASSERT(MUTEX_HELD(&cpu_lock));
2523 2526 ASSERT(cpu->cyp_state == CYS_ONLINE);
2524 2527 ASSERT(idp->cyi_cpu == NULL);
2525 2528 ASSERT(ocpu != NULL);
2526 2529
2527 2530 /*
2528 2531 * Prevent a reprogram of this cyclic while we are removing it.
2529 2532 * Otherwise, cyclic_reprogram_here() will end up sending an X-call
2530 2533 * to the offlined CPU.
2531 2534 */
2532 2535 rw_enter(&idp->cyi_lock, RW_WRITER);
2533 2536
2534 2537 while (ocpu != NULL && ocpu->cyo_cpu != cpu) {
2535 2538 prev = ocpu;
2536 2539 ocpu = ocpu->cyo_next;
2537 2540 }
2538 2541
2539 2542 /*
2540 2543 	 * We _must_ have found a cyc_omni_cpu which corresponds to this
2541 2544 * CPU -- the definition of an omnipresent cyclic is that it runs
2542 2545 * on all online CPUs.
2543 2546 */
2544 2547 ASSERT(ocpu != NULL);
2545 2548
2546 2549 if (prev == NULL) {
2547 2550 idp->cyi_omni_list = ocpu->cyo_next;
2548 2551 } else {
2549 2552 prev->cyo_next = ocpu->cyo_next;
2550 2553 }
2551 2554
2552 2555 /*
2553 2556 * Remove the cyclic from the source. We cannot block during this
2554 2557 * operation because we are holding the cyi_lock which can be held
2555 2558 * by the cyclic handler via cyclic_reprogram().
2556 2559 *
2557 2560 * If we cannot remove the cyclic without waiting, we spin for a time,
2558 2561 * and reattempt the (non-blocking) removal. If the handler is blocked
2559 2562 * on the cyi_lock, then we let go of it in the spin loop to give
2560 2563 * the handler a chance to run. Note that the removal will ultimately
2561 2564 * succeed -- even if the cyclic handler is blocked on a resource
2562 2565 * held by a thread which we have preempted, priority inheritance
2563 2566 * assures that the preempted thread will preempt us and continue
2564 2567 * to progress.
2565 2568 */
2566 2569 for (delay = 1; ; delay <<= 1) {
2567 2570 /*
2568 2571 * Before we begin this operation, disable kernel preemption.
2569 2572 */
2570 2573 kpreempt_disable();
2571 2574 ret = cyclic_remove_here(ocpu->cyo_cpu, ocpu->cyo_ndx, NULL,
2572 2575 CY_NOWAIT);
2573 2576 /*
2574 2577 * Enable kernel preemption while spinning.
2575 2578 */
2576 2579 kpreempt_enable();
2577 2580
2578 2581 if (ret)
2579 2582 break;
2580 2583
2581 2584 CYC_PTRACE("remove-omni-retry", idp, ocpu->cyo_cpu);
2582 2585
2583 2586 /*
2584 2587 * Drop the RW lock to avoid a deadlock with the cyclic
2585 2588 		 * handler (because it can potentially call cyclic_reprogram()).
2586 2589 */
2587 2590 rw_exit(&idp->cyi_lock);
2588 2591 drv_usecwait(delay);
2589 2592 rw_enter(&idp->cyi_lock, RW_WRITER);
2590 2593 }
2591 2594
2592 2595 /*
2593 2596 * Now that we have successfully removed the cyclic, allow the omni
2594 2597 * cyclic to be reprogrammed on other CPUs.
2595 2598 */
2596 2599 rw_exit(&idp->cyi_lock);
2597 2600
2598 2601 /*
2599 2602 * The cyclic has been removed from this CPU; time to call the
2600 2603 * omnipresent offline handler.
2601 2604 */
2602 2605 if (omni->cyo_offline != NULL)
2603 2606 omni->cyo_offline(omni->cyo_arg, cpu->cyp_cpu, ocpu->cyo_arg);
2604 2607
2605 2608 kmem_free(ocpu, sizeof (cyc_omni_cpu_t));
2606 2609 }
2607 2610
2608 2611 static cyc_id_t *
2609 2612 cyclic_new_id()
2610 2613 {
2611 2614 cyc_id_t *idp;
2612 2615
2613 2616 ASSERT(MUTEX_HELD(&cpu_lock));
2614 2617
2615 2618 idp = kmem_cache_alloc(cyclic_id_cache, KM_SLEEP);
2616 2619
2617 2620 /*
2618 2621 * The cyi_cpu field of the cyc_id_t structure tracks the CPU
2619 2622 * associated with the cyclic. If and only if this field is NULL, the
2620 2623 * cyc_id_t is an omnipresent cyclic. Note that cyi_omni_list may be
2621 2624 * NULL for an omnipresent cyclic while the cyclic is being created
2622 2625 * or destroyed.
2623 2626 */
2624 2627 idp->cyi_cpu = NULL;
2625 2628 idp->cyi_ndx = 0;
2626 2629 rw_init(&idp->cyi_lock, NULL, RW_DEFAULT, NULL);
2627 2630
2628 2631 idp->cyi_next = cyclic_id_head;
2629 2632 idp->cyi_prev = NULL;
2630 2633 idp->cyi_omni_list = NULL;
2631 2634
2632 2635 if (cyclic_id_head != NULL) {
2633 2636 ASSERT(cyclic_id_head->cyi_prev == NULL);
2634 2637 cyclic_id_head->cyi_prev = idp;
2635 2638 }
2636 2639
2637 2640 cyclic_id_head = idp;
2638 2641
2639 2642 return (idp);
2640 2643 }
2641 2644
2642 2645 /*
2643 2646 * cyclic_id_t cyclic_add(cyc_handler_t *, cyc_time_t *)
2644 2647 *
2645 2648 * Overview
2646 2649 *
2647 2650 * cyclic_add() will create an unbound cyclic with the specified handler and
2648 2651 * interval. The cyclic will run on a CPU which both has interrupts enabled
2649 2652 * and is in the system CPU partition.
2650 2653 *
2651 2654 * Arguments and notes
2652 2655 *
2653 2656 * As its first argument, cyclic_add() takes a cyc_handler, which has the
2654 2657 * following members:
2655 2658 *
2656 2659 * cyc_func_t cyh_func <-- Cyclic handler
2657 2660 * void *cyh_arg <-- Argument to cyclic handler
2658 2661 * cyc_level_t cyh_level <-- Level at which to fire; must be one of
2659 2662 * CY_LOW_LEVEL, CY_LOCK_LEVEL or CY_HIGH_LEVEL
2660 2663 *
2661 2664 	 * Note that cyh_level is _not_ an ipl or spl; it must be one of the
2662 2665 * CY_*_LEVELs. This layer of abstraction allows the platform to define
2663 2666 * the precise interrupt priority levels, within the following constraints:
2664 2667 *
2665 2668 * CY_LOCK_LEVEL must map to LOCK_LEVEL
2666 2669 * CY_HIGH_LEVEL must map to an ipl greater than LOCK_LEVEL
2667 2670 * CY_LOW_LEVEL must map to an ipl below LOCK_LEVEL
2668 2671 *
2669 2672 * In addition to a cyc_handler, cyclic_add() takes a cyc_time, which
2670 2673 * has the following members:
2671 2674 *
2672 2675 * hrtime_t cyt_when <-- Absolute time, in nanoseconds since boot, at
2673 2676 * which to start firing
2674 2677 * hrtime_t cyt_interval <-- Length of interval, in nanoseconds
2675 2678 *
2676 2679 * gethrtime() is the time source for nanoseconds since boot. If cyt_when
2677 2680 * is set to 0, the cyclic will start to fire when cyt_interval next
2678 2681 * divides the number of nanoseconds since boot.
2679 2682 *
2680 2683 * The cyt_interval field _must_ be filled in by the caller; one-shots are
2681 2684 * _not_ explicitly supported by the cyclic subsystem (cyclic_add() will
2682 2685 * assert that cyt_interval is non-zero). The maximum value for either
2683 2686 * field is INT64_MAX; the caller is responsible for assuring that
2684 2687 * cyt_when + cyt_interval <= INT64_MAX. Neither field may be negative.
2685 2688 *
2686 2689 * For an arbitrary time t in the future, the cyclic handler is guaranteed
2687 2690 * to have been called (t - cyt_when) / cyt_interval times. This will
2688 2691 * be true even if interrupts have been disabled for periods greater than
2689 2692 * cyt_interval nanoseconds. In order to compensate for such periods,
2690 2693 * the cyclic handler may be called a finite number of times with an
2691 2694 * arbitrarily small interval.
2692 2695 *
2693 2696 * The cyclic subsystem will not enforce any lower bound on the interval;
2694 2697 * if the interval is less than the time required to process an interrupt,
2695 2698 * the CPU will wedge. It's the responsibility of the caller to assure that
2696 2699 * either the value of the interval is sane, or that its caller has
2697 2700 * sufficient privilege to deny service (i.e. its caller is root).
2698 2701 *
2699 2702 * The cyclic handler is guaranteed to be single threaded, even while the
2700 2703 * cyclic is being juggled between CPUs (see cyclic_juggle(), below).
2701 2704 * That is, a given cyclic handler will never be executed simultaneously
2702 2705 * on different CPUs.
2703 2706 *
2704 2707 * Return value
2705 2708 *
2706 2709 * cyclic_add() returns a cyclic_id_t, which is guaranteed to be a value
2707 2710 * other than CYCLIC_NONE. cyclic_add() cannot fail.
2708 2711 *
2709 2712 * Caller's context
2710 2713 *
2711 2714 * cpu_lock must be held by the caller, and the caller must not be in
2712 2715 * interrupt context. cyclic_add() will perform a KM_SLEEP kernel
2713 2716 * memory allocation, so the usual rules (e.g. p_lock cannot be held)
2714 2717 * apply. A cyclic may be added even in the presence of CPUs that have
2715 2718 * not been configured with respect to the cyclic subsystem, but only
2716 2719 * configured CPUs will be eligible to run the new cyclic.
2717 2720 *
2718 2721 * Cyclic handler's context
2719 2722 *
2720 2723 * Cyclic handlers will be executed in the interrupt context corresponding
2721 2724 * to the specified level (i.e. either high, lock or low level). The
2722 2725 * usual context rules apply.
2723 2726 *
2724 2727 * A cyclic handler may not grab ANY locks held by the caller of any of
2725 2728 * cyclic_add(), cyclic_remove() or cyclic_bind(); the implementation of
2726 2729 * these functions may require blocking on cyclic handler completion.
2727 2730 * Moreover, cyclic handlers may not make any call back into the cyclic
2728 2731 * subsystem.
2729 2732 */
2730 2733 cyclic_id_t
2731 2734 cyclic_add(cyc_handler_t *hdlr, cyc_time_t *when)
2732 2735 {
2733 2736 cyc_id_t *idp = cyclic_new_id();
2734 2737
2735 2738 ASSERT(MUTEX_HELD(&cpu_lock));
2736 2739 ASSERT(when->cyt_when >= 0 && when->cyt_interval > 0);
2737 2740
2738 2741 idp->cyi_cpu = cyclic_pick_cpu(NULL, NULL, NULL, 0);
2739 2742 idp->cyi_ndx = cyclic_add_here(idp->cyi_cpu, hdlr, when, 0);
2740 2743
2741 2744 return ((uintptr_t)idp);
2742 2745 }
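
The interface documented above is easiest to read next to a call site. A hedged usage sketch (the handler, level, and one-second interval are illustrative, not taken from this change):

#include <sys/cyclic.h>
#include <sys/cpuvar.h>
#include <sys/time.h>

static void
my_tick(void *arg)
{
	/* Runs once per second in low-level interrupt context. */
}

static cyclic_id_t
start_tick(void)
{
	cyc_handler_t hdlr;
	cyc_time_t when;
	cyclic_id_t id;

	hdlr.cyh_func = my_tick;
	hdlr.cyh_arg = NULL;
	hdlr.cyh_level = CY_LOW_LEVEL;

	when.cyt_when = 0;		/* start on next interval boundary */
	when.cyt_interval = NANOSEC;	/* fire every second */

	mutex_enter(&cpu_lock);		/* caller must hold cpu_lock */
	id = cyclic_add(&hdlr, &when);
	mutex_exit(&cpu_lock);

	return (id);
}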
2743 2746
2744 2747 /*
2745 2748 * cyclic_id_t cyclic_add_omni(cyc_omni_handler_t *)
2746 2749 *
2747 2750 * Overview
2748 2751 *
2749 2752 * cyclic_add_omni() will create an omnipresent cyclic with the specified
2750 2753 * online and offline handlers. Omnipresent cyclics run on all online
2751 2754 * CPUs, including CPUs which have unbound interrupts disabled.
2752 2755 *
2753 2756 * Arguments
2754 2757 *
2755 2758 * As its only argument, cyclic_add_omni() takes a cyc_omni_handler, which
2756 2759 * has the following members:
2757 2760 *
2758 2761 * void (*cyo_online)() <-- Online handler
2759 2762 * void (*cyo_offline)() <-- Offline handler
2760 2763 * void *cyo_arg <-- Argument to be passed to on/offline handlers
2761 2764 *
2762 2765 * Online handler
2763 2766 *
2764 2767 * The cyo_online member is a pointer to a function which has the following
2765 2768 * four arguments:
2766 2769 *
2767 2770 * void * <-- Argument (cyo_arg)
2768 2771 * cpu_t * <-- Pointer to CPU about to be onlined
2769 2772 * cyc_handler_t * <-- Pointer to cyc_handler_t; must be filled in
2770 2773 * by omni online handler
2771 2774 * cyc_time_t * <-- Pointer to cyc_time_t; must be filled in by
2772 2775 * omni online handler
2773 2776 *
2774 2777 * The omni cyclic online handler is always called _before_ the omni
2775 2778 * cyclic begins to fire on the specified CPU. As the above argument
2776 2779 * description implies, the online handler must fill in the two structures
2777 2780 * passed to it: the cyc_handler_t and the cyc_time_t. These are the
2778 2781 * same two structures passed to cyclic_add(), outlined above. This
2779 2782 * allows the omni cyclic to have maximum flexibility; different CPUs may
2780 2783 * optionally
2781 2784 *
2782 2785 * (a) have different intervals
2783 2786 * (b) be explicitly in or out of phase with one another
2784 2787 * (c) have different handlers
2785 2788 * (d) have different handler arguments
2786 2789 * (e) fire at different levels
2787 2790 *
2788 2791 * Of these, (e) seems somewhat dubious, but is nonetheless allowed.
2789 2792 *
2790 2793 * The omni online handler is called in the same context as cyclic_add(),
2791 2794 * and has the same liberties: omni online handlers may perform KM_SLEEP
2792 2795 * kernel memory allocations, and may grab locks which are also acquired
2793 2796 * by cyclic handlers. However, omni cyclic online handlers may _not_
2794 2797 * call back into the cyclic subsystem, and should be generally careful
2795 2798 * about calling into arbitrary kernel subsystems.
2796 2799 *
2797 2800 * Offline handler
2798 2801 *
2799 2802 * The cyo_offline member is a pointer to a function which has the following
2800 2803 * three arguments:
2801 2804 *
2802 2805 * void * <-- Argument (cyo_arg)
2803 2806 * cpu_t * <-- Pointer to CPU about to be offlined
2804 2807 * void * <-- CPU's cyclic argument (that is, value
2805 2808 * to which cyh_arg member of the cyc_handler_t
2806 2809 * was set in the omni online handler)
2807 2810 *
2808 2811 * The omni cyclic offline handler is always called _after_ the omni
2809 2812 * cyclic has ceased firing on the specified CPU. Its purpose is to
2810 2813 * allow cleanup of any resources dynamically allocated in the omni cyclic
2811 2814 * online handler. The context of the offline handler is identical to
2812 2815 * that of the online handler; the same constraints and liberties apply.
2813 2816 *
2814 2817 * The offline handler is optional; it may be NULL.
2815 2818 *
2816 2819 * Return value
2817 2820 *
2818 2821 * cyclic_add_omni() returns a cyclic_id_t, which is guaranteed to be a
2819 2822 * value other than CYCLIC_NONE. cyclic_add_omni() cannot fail.
2820 2823 *
2821 2824 * Caller's context
2822 2825 *
2823 2826 * The caller's context is identical to that of cyclic_add(), specified
2824 2827 * above.
2825 2828 */
2826 2829 cyclic_id_t
2827 2830 cyclic_add_omni(cyc_omni_handler_t *omni)
2828 2831 {
2829 2832 cyc_id_t *idp = cyclic_new_id();
2830 2833 cyc_cpu_t *cpu;
2831 2834 cpu_t *c;
2832 2835
2833 2836 ASSERT(MUTEX_HELD(&cpu_lock));
2834 2837 ASSERT(omni != NULL && omni->cyo_online != NULL);
2835 2838
2836 2839 idp->cyi_omni_hdlr = *omni;
2837 2840
2838 2841 c = cpu_list;
2839 2842 do {
2840 2843 if ((cpu = c->cpu_cyclic) == NULL)
2841 2844 continue;
2842 2845
2843 2846 if (cpu->cyp_state != CYS_ONLINE) {
2844 2847 ASSERT(cpu->cyp_state == CYS_OFFLINE);
2845 2848 continue;
2846 2849 }
2847 2850
2848 2851 cyclic_omni_start(idp, cpu);
2849 2852 } while ((c = c->cpu_next) != cpu_list);
2850 2853
2851 2854 /*
2852 2855 * We must have found at least one online CPU on which to run
2853 2856 * this cyclic.
2854 2857 */
2855 2858 ASSERT(idp->cyi_omni_list != NULL);
2856 2859 ASSERT(idp->cyi_cpu == NULL);
2857 2860
2858 2861 return ((uintptr_t)idp);
2859 2862 }
2860 2863
2861 2864 /*
2862 2865 * void cyclic_remove(cyclic_id_t)
2863 2866 *
2864 2867 * Overview
2865 2868 *
2866 2869 * cyclic_remove() will remove the specified cyclic from the system.
2867 2870 *
2868 2871 * Arguments and notes
2869 2872 *
2870 2873 * The only argument is a cyclic_id returned from either cyclic_add() or
2871 2874 * cyclic_add_omni().
2872 2875 *
2873 2876 * By the time cyclic_remove() returns, the caller is guaranteed that the
2874 2877 * removed cyclic handler has completed execution (this is the same
2875 2878 * semantic that untimeout() provides). As a result, cyclic_remove() may
2876 2879 * need to block, waiting for the removed cyclic to complete execution.
2877 2880 * This leads to an important constraint on the caller: no lock may be
2878 2881 * held across cyclic_remove() that also may be acquired by a cyclic
2879 2882 * handler.
2880 2883 *
2881 2884 * Return value
2882 2885 *
2883 2886 * None; cyclic_remove() always succeeds.
2884 2887 *
2885 2888 * Caller's context
2886 2889 *
2887 2890 * cpu_lock must be held by the caller, and the caller must not be in
2888 2891 * interrupt context. The caller may not hold any locks which are also
2889 2892 * grabbed by any cyclic handler. See "Arguments and notes", above.
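 *
 *	A minimal removal sketch, given an id returned from cyclic_add() or
 *	cyclic_add_omni() (and assuming no lock shared with a cyclic handler
 *	is held):
 *
 *	mutex_enter(&cpu_lock);
 *	cyclic_remove(id);
 *	mutex_exit(&cpu_lock);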
2890 2893 */
2891 2894 void
2892 2895 cyclic_remove(cyclic_id_t id)
2893 2896 {
2894 2897 cyc_id_t *idp = (cyc_id_t *)id;
2895 2898 cyc_id_t *prev = idp->cyi_prev, *next = idp->cyi_next;
2896 2899 cyc_cpu_t *cpu = idp->cyi_cpu;
2897 2900
2898 2901 CYC_PTRACE("remove", idp, idp->cyi_cpu);
2899 2902 ASSERT(MUTEX_HELD(&cpu_lock));
2900 2903
2901 2904 if (cpu != NULL) {
2902 2905 (void) cyclic_remove_here(cpu, idp->cyi_ndx, NULL, CY_WAIT);
2903 2906 } else {
2904 2907 ASSERT(idp->cyi_omni_list != NULL);
2905 2908 while (idp->cyi_omni_list != NULL)
2906 2909 cyclic_omni_stop(idp, idp->cyi_omni_list->cyo_cpu);
2907 2910 }
2908 2911
2909 2912 if (prev != NULL) {
2910 2913 ASSERT(cyclic_id_head != idp);
2911 2914 prev->cyi_next = next;
2912 2915 } else {
2913 2916 ASSERT(cyclic_id_head == idp);
2914 2917 cyclic_id_head = next;
2915 2918 }
2916 2919
2917 2920 if (next != NULL)
2918 2921 next->cyi_prev = prev;
2919 2922
2920 2923 kmem_cache_free(cyclic_id_cache, idp);
2921 2924 }
2922 2925
2923 2926 /*
2924 2927 * void cyclic_bind(cyclic_id_t, cpu_t *, cpupart_t *)
2925 2928 *
2926 2929 * Overview
2927 2930 *
2928 2931 * cyclic_bind() atomically changes the CPU and CPU partition bindings
2929 2932 * of a cyclic.
2930 2933 *
2931 2934 * Arguments and notes
2932 2935 *
2933 2936 * The first argument is a cyclic_id returned from cyclic_add().
2934 2937 * cyclic_bind() may _not_ be called on a cyclic_id returned from
2935 2938 * cyclic_add_omni().
2936 2939 *
2937 2940 * The second argument specifies the CPU to which to bind the specified
2938 2941 * cyclic. If the specified cyclic is bound to a CPU other than the one
2939 2942 * specified, it will be unbound from its bound CPU. Unbinding the cyclic
2940 2943 * from its CPU may cause it to be juggled to another CPU. If the specified
2941 2944 * CPU is non-NULL, the cyclic will be subsequently rebound to the specified
2942 2945 * CPU.
2943 2946 *
2944 2947 * If a CPU with bound cyclics is transitioned into the P_NOINTR state,
2945 2948 * only cyclics not bound to the CPU can be juggled away; CPU-bound cyclics
2946 2949 * will continue to fire on the P_NOINTR CPU. A CPU with bound cyclics
2947 2950 * cannot be offlined (attempts to offline the CPU will return EBUSY).
2948 2951 * Likewise, cyclics may not be bound to an offline CPU; if the caller
2949 2952 * attempts to bind a cyclic to an offline CPU, the cyclic subsystem will
2950 2953 * panic.
2951 2954 *
2952 2955 * The third argument specifies the CPU partition to which to bind the
2953 2956 * specified cyclic. If the specified cyclic is bound to a CPU partition
2954 2957 * other than the one specified, it will be unbound from its bound
2955 2958 * partition. Unbinding the cyclic from its CPU partition may cause it
2956 2959 * to be juggled to another CPU. If the specified CPU partition is
2957 2960 * non-NULL, the cyclic will be subsequently rebound to the specified CPU
2958 2961 * partition.
2959 2962 *
2960 2963 * It is the caller's responsibility to assure that the specified CPU
2961 2964 * partition contains a CPU. If it does not, the cyclic subsystem will
2962 2965 * panic. A CPU partition with bound cyclics cannot be destroyed (attempts
2963 2966 * to destroy the partition will return EBUSY). If a CPU with
2964 2967 * partition-bound cyclics is transitioned into the P_NOINTR state, cyclics
2965 2968 * bound to the CPU's partition (but not bound to the CPU) will be juggled
2966 2969 * away only if there exists another CPU in the partition in the P_ONLINE
2967 2970 * state.
2968 2971 *
2969 2972 * It is the caller's responsibility to assure that the specified CPU and
2970 2973 * CPU partition are self-consistent. If both parameters are non-NULL,
2971 2974 * and the specified CPU partition does not contain the specified CPU, the
2972 2975 * cyclic subsystem will panic.
2973 2976 *
2974 2977 * It is the caller's responsibility to assure that the specified CPU has
2975 2978 * been configured with respect to the cyclic subsystem. Generally, this
2976 2979 * is always true for valid, on-line CPUs. The only periods of time during
2977 2980 * which this may not be true are during MP boot (i.e. after cyclic_init()
2978 2981 * is called but before cyclic_mp_init() is called) or during dynamic
2979 2982 * reconfiguration; cyclic_bind() should only be called with great care
2980 2983 * from these contexts.
2981 2984 *
2982 2985 * Return value
2983 2986 *
2984 2987 * None; cyclic_bind() always succeeds.
2985 2988 *
2986 2989 * Caller's context
2987 2990 *
2988 2991 * cpu_lock must be held by the caller, and the caller must not be in
2989 2992 * interrupt context. The caller may not hold any locks which are also
2990 2993 * grabbed by any cyclic handler.
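 *
 *	As a hedged sketch, binding a cyclic_add()-returned id to a known
 *	on-line CPU cp (cp is an assumed, valid cpu_t pointer), leaving the
 *	cyclic unbound from any CPU partition:
 *
 *	mutex_enter(&cpu_lock);
 *	cyclic_bind(id, cp, NULL);
 *	mutex_exit(&cpu_lock);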
2991 2994 */
2992 2995 void
2993 2996 cyclic_bind(cyclic_id_t id, cpu_t *d, cpupart_t *part)
2994 2997 {
2995 2998 cyc_id_t *idp = (cyc_id_t *)id;
2996 2999 cyc_cpu_t *cpu = idp->cyi_cpu;
2997 3000 cpu_t *c;
2998 3001 uint16_t flags;
2999 3002
3000 3003 CYC_PTRACE("bind", d, part);
3001 3004 ASSERT(MUTEX_HELD(&cpu_lock));
3002 3005 ASSERT(part == NULL || d == NULL || d->cpu_part == part);
3003 3006
3004 3007 if (cpu == NULL) {
3005 3008 ASSERT(idp->cyi_omni_list != NULL);
3006 3009 panic("attempt to change binding of omnipresent cyclic");
3007 3010 }
3008 3011
3009 3012 c = cpu->cyp_cpu;
3010 3013 flags = cpu->cyp_cyclics[idp->cyi_ndx].cy_flags;
3011 3014
3012 3015 if (c != d && (flags & CYF_CPU_BOUND))
3013 3016 cyclic_unbind_cpu(id);
3014 3017
3015 3018 /*
3016 3019 * Reload our cpu (we may have migrated). We don't have to reload
3017 3020 * the flags field here; if we were CYF_PART_BOUND on entry, we are
3018 3021 * CYF_PART_BOUND now.
3019 3022 */
3020 3023 cpu = idp->cyi_cpu;
3021 3024 c = cpu->cyp_cpu;
3022 3025
3023 3026 if (part != c->cpu_part && (flags & CYF_PART_BOUND))
3024 3027 cyclic_unbind_cpupart(id);
3025 3028
3026 3029 /*
3027 3030 * Now reload the flags field, asserting that if we are CPU bound,
3028 3031 * the CPU was specified (and likewise, if we are partition bound,
3029 3032 * the partition was specified).
3030 3033 */
3031 3034 cpu = idp->cyi_cpu;
3032 3035 c = cpu->cyp_cpu;
3033 3036 flags = cpu->cyp_cyclics[idp->cyi_ndx].cy_flags;
3034 3037 ASSERT(!(flags & CYF_CPU_BOUND) || c == d);
3035 3038 ASSERT(!(flags & CYF_PART_BOUND) || c->cpu_part == part);
3036 3039
3037 3040 if (!(flags & CYF_CPU_BOUND) && d != NULL)
3038 3041 cyclic_bind_cpu(id, d);
3039 3042
3040 3043 if (!(flags & CYF_PART_BOUND) && part != NULL)
3041 3044 cyclic_bind_cpupart(id, part);
3042 3045 }
3043 3046
3044 3047 int
3045 3048 cyclic_reprogram(cyclic_id_t id, hrtime_t expiration)
3046 3049 {
3047 3050 cyc_id_t *idp = (cyc_id_t *)id;
3048 3051 cyc_cpu_t *cpu;
3049 3052 cyc_omni_cpu_t *ocpu;
3050 3053 cyc_index_t ndx;
3051 3054
3052 3055 ASSERT(expiration > 0);
3053 3056
3054 3057 CYC_PTRACE("reprog", idp, idp->cyi_cpu);
3055 3058
3056 3059 kpreempt_disable();
3057 3060
3058 3061 /*
3059 3062 * Prevent the cyclic from moving or disappearing while we reprogram.
3060 3063 */
3061 3064 rw_enter(&idp->cyi_lock, RW_READER);
3062 3065
3063 3066 if (idp->cyi_cpu == NULL) {
3064 3067 ASSERT(curthread->t_preempt > 0);
3065 3068 cpu = CPU->cpu_cyclic;
3066 3069
3067 3070 /*
3068 3071 * For an omni cyclic, we reprogram the cyclic corresponding
3069 3072 * to the current CPU. Look for it in the list.
3070 3073 */
3071 3074 ocpu = idp->cyi_omni_list;
3072 3075 while (ocpu != NULL) {
3073 3076 if (ocpu->cyo_cpu == cpu)
3074 3077 break;
3075 3078 ocpu = ocpu->cyo_next;
3076 3079 }
3077 3080
3078 3081 if (ocpu == NULL) {
3079 3082 /*
3080 3083 * Didn't find it. This means that CPU offline
3081 3084 * must have removed it racing with us. So,
3082 3085 * nothing to do.
3083 3086 */
3084 3087 rw_exit(&idp->cyi_lock);
3085 3088
3086 3089 kpreempt_enable();
3087 3090
3088 3091 return (0);
3089 3092 }
3090 3093 ndx = ocpu->cyo_ndx;
3091 3094 } else {
3092 3095 cpu = idp->cyi_cpu;
3093 3096 ndx = idp->cyi_ndx;
3094 3097 }
3095 3098
3096 3099 if (cpu->cyp_cpu == CPU)
3097 3100 cyclic_reprogram_cyclic(cpu, ndx, expiration);
3098 3101 else
3099 3102 cyclic_reprogram_here(cpu, ndx, expiration);
3100 3103
3101 3104 /*
3102 3105 * Allow the cyclic to be moved or removed.
3103 3106 */
3104 3107 rw_exit(&idp->cyi_lock);
3105 3108
3106 3109 kpreempt_enable();
3107 3110
3108 3111 return (1);
3109 3112 }
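
/*
 * A hedged usage sketch (my_id and my_oneshot_handler are hypothetical):
 * one common pattern is to add a cyclic with a cyt_interval of CY_INFINITY
 * and rearm it from its own handler, yielding one-shot-style behavior:
 *
 *	static void
 *	my_oneshot_handler(void *arg)
 *	{
 *		...do the work...
 *		(void) cyclic_reprogram(my_id, gethrtime() + NANOSEC);
 *	}
 *
 * For an omni cyclic, the call reprograms only the component on the
 * executing CPU, and returns 0 if that component raced with a CPU offline
 * (as handled in the code above).
 */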
3110 3113
3111 3114 hrtime_t
3112 3115 cyclic_getres()
3113 3116 {
3114 3117 return (cyclic_resolution);
3115 3118 }
3116 3119
3117 3120 void
3118 3121 cyclic_init(cyc_backend_t *be, hrtime_t resolution)
3119 3122 {
3120 3123 ASSERT(MUTEX_HELD(&cpu_lock));
3121 3124
3122 3125 CYC_PTRACE("init", be, resolution);
3123 3126 cyclic_resolution = resolution;
3124 3127
3125 3128 /*
3126 3129 * Copy the passed cyc_backend into the backend template. This must
3127 3130 * be done before the CPU can be configured.
3128 3131 */
3129 3132 bcopy(be, &cyclic_backend, sizeof (cyc_backend_t));
3130 3133
3131 3134 /*
3132 3135 * It's safe to look at the "CPU" pointer without disabling kernel
3133 3136 * preemption; cyclic_init() is called only during startup by the
3134 3137 * cyclic backend.
3135 3138 */
3136 3139 cyclic_configure(CPU);
3137 3140 cyclic_online(CPU);
3138 3141 }
3139 3142
3140 3143 /*
3141 3144 * It is assumed that cyclic_mp_init() is called some time after cyclic
3142 3145 * init (and therefore, after cpu0 has been initialized). We grab cpu_lock,
3143 3146 * find the already initialized CPU, and initialize every other CPU with the
3144 3147 * same backend. Finally, we register a cpu_setup function.
3145 3148 */
3146 3149 void
3147 3150 cyclic_mp_init()
3148 3151 {
3149 3152 cpu_t *c;
3150 3153
3151 3154 mutex_enter(&cpu_lock);
3152 3155
3153 3156 c = cpu_list;
3154 3157 do {
3155 3158 if (c->cpu_cyclic == NULL) {
3156 3159 cyclic_configure(c);
3157 3160 cyclic_online(c);
3158 3161 }
3159 3162 } while ((c = c->cpu_next) != cpu_list);
3160 3163
3161 3164 register_cpu_setup_func((cpu_setup_func_t *)cyclic_cpu_setup, NULL);
3162 3165 mutex_exit(&cpu_lock);
3163 3166 }
3164 3167
3165 3168 /*
3166 3169 * int cyclic_juggle(cpu_t *)
3167 3170 *
3168 3171 * Overview
3169 3172 *
3170 3173 * cyclic_juggle() juggles as many cyclics as possible away from the
3171 3174 * specified CPU; all remaining cyclics on the CPU will either be CPU-
3172 3175 * or partition-bound.
3173 3176 *
3174 3177 * Arguments and notes
3175 3178 *
3176 3179 * The only argument to cyclic_juggle() is the CPU from which cyclics
3177 3180 * should be juggled. CPU-bound cyclics are never juggled; partition-bound
3178 3181 * cyclics are only juggled if the specified CPU is in the P_NOINTR state
3179 3182 * and there exists a P_ONLINE CPU in the partition. The cyclic subsystem
3180 3183 * assures that a cyclic will never fire late or spuriously, even while
3181 3184 * being juggled.
3182 3185 *
3183 3186 * Return value
3184 3187 *
3185 3188 * cyclic_juggle() returns a non-zero value if all cyclics were able to
3186 3189 * be juggled away from the CPU, and zero if one or more cyclics could
3187 3190 * not be juggled away.
3188 3191 *
3189 3192 * Caller's context
3190 3193 *
3191 3194 * cpu_lock must be held by the caller, and the caller must not be in
3192 3195 * interrupt context. The caller may not hold any locks which are also
3193 3196 * grabbed by any cyclic handler. While cyclic_juggle() _may_ be called
3194 3197 * in any context satisfying these constraints, it _must_ be called
3195 3198 * immediately after clearing CPU_ENABLE (i.e. before dropping cpu_lock).
3196 3199 * Failure to do so could result in an assertion failure in the cyclic
3197 3200 * subsystem.
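 *
 *	A hedged sketch of the required ordering (this is not the actual
 *	processor-management code; error handling is elided):
 *
 *	mutex_enter(&cpu_lock);
 *	c->cpu_flags &= ~CPU_ENABLE;
 *	(void) cyclic_juggle(c);
 *	mutex_exit(&cpu_lock);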
3198 3201 */
3199 3202 int
3200 3203 cyclic_juggle(cpu_t *c)
3201 3204 {
3202 3205 cyc_cpu_t *cpu = c->cpu_cyclic;
3203 3206 cyc_id_t *idp;
3204 3207 int all_juggled = 1;
3205 3208
3206 3209 CYC_PTRACE1("juggle", c);
3207 3210 ASSERT(MUTEX_HELD(&cpu_lock));
3208 3211
3209 3212 /*
3210 3213 * We'll go through each cyclic on the CPU, attempting to juggle
3211 3214 * each one elsewhere.
3212 3215 */
3213 3216 for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
3214 3217 if (idp->cyi_cpu != cpu)
3215 3218 continue;
3216 3219
3217 3220 if (cyclic_juggle_one(idp) == 0) {
3218 3221 all_juggled = 0;
3219 3222 continue;
3220 3223 }
3221 3224
3222 3225 ASSERT(idp->cyi_cpu != cpu);
3223 3226 }
3224 3227
3225 3228 return (all_juggled);
3226 3229 }
3227 3230
3228 3231 /*
3229 3232 * int cyclic_offline(cpu_t *)
3230 3233 *
3231 3234 * Overview
3232 3235 *
3233 3236 * cyclic_offline() offlines the cyclic subsystem on the specified CPU.
3234 3237 *
3235 3238 * Arguments and notes
3236 3239 *
3237 3240 * The only argument to cyclic_offline() is a CPU to offline.
3238 3241 * cyclic_offline() will attempt to juggle cyclics away from the specified
3239 3242 * CPU.
3240 3243 *
3241 3244 * Return value
3242 3245 *
3243 3246 * cyclic_offline() returns 1 if all cyclics on the CPU were juggled away
3244 3247 * and the cyclic subsystem on the CPU was successfully offlined.
3245 3248 * cyclic_offline() returns 0 if some cyclics remain, blocking the cyclic
3246 3249 * offline operation. All remaining cyclics on the CPU will either be
3247 3250 * CPU- or partition-bound.
3248 3251 *
3249 3252 * See the "Arguments and notes" of cyclic_juggle(), above, for more detail
3250 3253 * on cyclic juggling.
3251 3254 *
3252 3255 * Caller's context
3253 3256 *
3254 3257 * The only caller of cyclic_offline() should be the processor management
3255 3258 * subsystem. It is expected that the caller of cyclic_offline() will
3256 3259 * offline the CPU immediately after cyclic_offline() returns success (i.e.
3257 3260 * before dropping cpu_lock). Moreover, it is expected that the caller will
3258 3261 * fail the CPU offline operation if cyclic_offline() returns failure.
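 *
 *	A hedged sketch of that caller protocol (the surrounding processor
 *	management logic is elided):
 *
 *	ASSERT(MUTEX_HELD(&cpu_lock));
 *	if (!cyclic_offline(c))
 *		return (EBUSY);
 *	... proceed to offline the CPU before dropping cpu_lock ...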
3259 3262 */
3260 3263 int
3261 3264 cyclic_offline(cpu_t *c)
3262 3265 {
3263 3266 cyc_cpu_t *cpu = c->cpu_cyclic;
3264 3267 cyc_id_t *idp;
3265 3268
3266 3269 CYC_PTRACE1("offline", cpu);
3267 3270 ASSERT(MUTEX_HELD(&cpu_lock));
3268 3271
3269 3272 if (!cyclic_juggle(c))
3270 3273 return (0);
3271 3274
3272 3275 /*
3273 3276 * This CPU is headed offline; we need to now stop omnipresent
3274 3277 * cyclic firing on this CPU.
3275 3278 */
3276 3279 for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
3277 3280 if (idp->cyi_cpu != NULL)
3278 3281 continue;
3279 3282
3280 3283 /*
3281 3284 * We cannot possibly be offlining the last CPU; cyi_omni_list
3282 3285 * must be non-NULL.
3283 3286 */
3284 3287 ASSERT(idp->cyi_omni_list != NULL);
3285 3288 cyclic_omni_stop(idp, cpu);
3286 3289 }
3287 3290
3288 3291 ASSERT(cpu->cyp_state == CYS_ONLINE);
3289 3292 cpu->cyp_state = CYS_OFFLINE;
3290 3293
3291 3294 return (1);
3292 3295 }
3293 3296
3294 3297 /*
3295 3298 * void cyclic_online(cpu_t *)
3296 3299 *
3297 3300 * Overview
3298 3301 *
3299 3302 * cyclic_online() onlines a CPU previously offlined with cyclic_offline().
3300 3303 *
3301 3304 * Arguments and notes
3302 3305 *
3303 3306 * cyclic_online()'s only argument is a CPU to online. The specified
3304 3307 * CPU must have been previously offlined with cyclic_offline(). After
3305 3308 * cyclic_online() returns, the specified CPU will be eligible to execute
3306 3309 * cyclics.
3307 3310 *
3308 3311 * Return value
3309 3312 *
3310 3313 * None; cyclic_online() always succeeds.
3311 3314 *
3312 3315 * Caller's context
3313 3316 *
3314 3317 * cyclic_online() should only be called by the processor management
3315 3318 * subsystem; cpu_lock must be held.
3316 3319 */
3317 3320 void
3318 3321 cyclic_online(cpu_t *c)
3319 3322 {
3320 3323 cyc_cpu_t *cpu = c->cpu_cyclic;
3321 3324 cyc_id_t *idp;
3322 3325
3323 3326 CYC_PTRACE1("online", cpu);
3324 3327 ASSERT(c->cpu_flags & CPU_ENABLE);
3325 3328 ASSERT(MUTEX_HELD(&cpu_lock));
3326 3329 ASSERT(cpu->cyp_state == CYS_OFFLINE);
3327 3330
3328 3331 cpu->cyp_state = CYS_ONLINE;
3329 3332
3330 3333 /*
3331 3334 * Now that this CPU is open for business, we need to start firing
3332 3335 * all omnipresent cyclics on it.
3333 3336 */
3334 3337 for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
3335 3338 if (idp->cyi_cpu != NULL)
3336 3339 continue;
3337 3340
3338 3341 cyclic_omni_start(idp, cpu);
3339 3342 }
3340 3343 }
3341 3344
3342 3345 /*
3343 3346 * void cyclic_move_in(cpu_t *)
3344 3347 *
3345 3348 * Overview
3346 3349 *
3347 3350 * cyclic_move_in() is called by the CPU partition code immediately after
3348 3351 * the specified CPU has moved into a new partition.
3349 3352 *
3350 3353 * Arguments and notes
3351 3354 *
3352 3355 * The only argument to cyclic_move_in() is a CPU which has moved into a
3353 3356 * new partition. If the specified CPU is P_ONLINE, and every other
3354 3357 * CPU in the specified CPU's new partition is P_NOINTR, cyclic_move_in()
3355 3358 * will juggle all partition-bound, CPU-unbound cyclics to the specified
3356 3359 * CPU.
3357 3360 *
3358 3361 * Return value
3359 3362 *
3360 3363 * None; cyclic_move_in() always succeeds.
3361 3364 *
3362 3365 * Caller's context
3363 3366 *
3364 3367 * cyclic_move_in() should _only_ be called immediately after a CPU has
3365 3368 * moved into a new partition, with cpu_lock held. As with other calls
3366 3369 * into the cyclic subsystem, no lock may be held which is also grabbed
3367 3370 * by any cyclic handler.
3368 3371 */
3369 3372 void
3370 3373 cyclic_move_in(cpu_t *d)
3371 3374 {
3372 3375 cyc_id_t *idp;
3373 3376 cyc_cpu_t *dest = d->cpu_cyclic;
3374 3377 cyclic_t *cyclic;
3375 3378 cpupart_t *part = d->cpu_part;
3376 3379
3377 3380 CYC_PTRACE("move-in", dest, part);
3378 3381 ASSERT(MUTEX_HELD(&cpu_lock));
3379 3382
3380 3383 /*
3381 3384 * Look for CYF_PART_BOUND cyclics in the new partition. If
3382 3385 * we find one, check to see if it is currently on a CPU which has
3383 3386 * interrupts disabled. If it is (and if this CPU currently has
3384 3387 * interrupts enabled), we'll juggle those cyclics over here.
3385 3388 */
3386 3389 if (!(d->cpu_flags & CPU_ENABLE)) {
3387 3390 CYC_PTRACE1("move-in-none", dest);
3388 3391 return;
3389 3392 }
3390 3393
3391 3394 for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
3392 3395 cyc_cpu_t *cpu = idp->cyi_cpu;
3393 3396 cpu_t *c;
3394 3397
3395 3398 /*
3396 3399 * Omnipresent cyclics are exempt from juggling.
3397 3400 */
3398 3401 if (cpu == NULL)
3399 3402 continue;
3400 3403
3401 3404 c = cpu->cyp_cpu;
3402 3405
3403 3406 if (c->cpu_part != part || (c->cpu_flags & CPU_ENABLE))
3404 3407 continue;
3405 3408
3406 3409 cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
3407 3410
3408 3411 if (cyclic->cy_flags & CYF_CPU_BOUND)
3409 3412 continue;
3410 3413
3411 3414 /*
3412 3415 * We know that this cyclic is bound to its processor set
3413 3416 * (otherwise, it would not be on a CPU with interrupts
3414 3417 * disabled); juggle it to our CPU.
3415 3418 */
3416 3419 ASSERT(cyclic->cy_flags & CYF_PART_BOUND);
3417 3420 cyclic_juggle_one_to(idp, dest);
3418 3421 }
3419 3422
3420 3423 CYC_PTRACE1("move-in-done", dest);
3421 3424 }
3422 3425
3423 3426 /*
3424 3427 * int cyclic_move_out(cpu_t *)
3425 3428 *
3426 3429 * Overview
3427 3430 *
3428 3431 * cyclic_move_out() is called by the CPU partition code immediately before
3429 3432 * the specified CPU is to move out of its partition.
3430 3433 *
3431 3434 * Arguments and notes
3432 3435 *
3433 3436 * The only argument to cyclic_move_out() is a CPU which is to move out of
3434 3437 * its partition.
3435 3438 *
3436 3439 * cyclic_move_out() will attempt to juggle away all partition-bound
3437 3440 * cyclics. If the specified CPU is the last CPU in a partition with
3438 3441 * partition-bound cyclics, cyclic_move_out() will fail. If there exists
3439 3442 * a partition-bound cyclic which is CPU-bound to the specified CPU,
3440 3443 * cyclic_move_out() will fail.
3441 3444 *
3442 3445 * Note that cyclic_move_out() will _only_ attempt to juggle away
3443 3446 * partition-bound cyclics; CPU-bound cyclics which are not partition-bound
3444 3447 * and unbound cyclics are not affected by changing the partition
3445 3448 * affiliation of the CPU.
3446 3449 *
3447 3450 * Return value
3448 3451 *
3449 3452 * cyclic_move_out() returns 1 if all partition-bound cyclics on the CPU
3450 3453 * were juggled away; 0 if some cyclics remain.
3451 3454 *
3452 3455 * Caller's context
3453 3456 *
3454 3457 * cyclic_move_out() should _only_ be called immediately before a CPU has
3455 3458 * moved out of its partition, with cpu_lock held. It is expected that
3456 3459 * the caller of cyclic_move_out() will change the processor set affiliation
3457 3460 * of the specified CPU immediately after cyclic_move_out() returns
3458 3461 * success (i.e. before dropping cpu_lock). Moreover, it is expected that
3459 3462 * the caller will fail the CPU repartitioning operation if cyclic_move_out()
3460 3463 * returns failure. As with other calls into the cyclic subsystem, no lock
3461 3464 * may be held which is also grabbed by any cyclic handler.
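 *
 *	A hedged sketch of the expected repartitioning protocol (the actual
 *	processor-set bookkeeping is elided):
 *
 *	ASSERT(MUTEX_HELD(&cpu_lock));
 *	if (!cyclic_move_out(c))
 *		return (EBUSY);
 *	... change the CPU's partition affiliation ...
 *	cyclic_move_in(c);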
3462 3465 */
3463 3466 int
3464 3467 cyclic_move_out(cpu_t *c)
3465 3468 {
3466 3469 cyc_id_t *idp;
3467 3470 cyc_cpu_t *cpu = c->cpu_cyclic, *dest;
3468 3471 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
3469 3472 cpupart_t *part = c->cpu_part;
3470 3473
3471 3474 CYC_PTRACE1("move-out", cpu);
3472 3475 ASSERT(MUTEX_HELD(&cpu_lock));
3473 3476
3474 3477 /*
3475 3478 * If there are any CYF_PART_BOUND cyclics on this CPU, we need
3476 3479 * to try to juggle them away.
3477 3480 */
3478 3481 for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
3479 3482
3480 3483 if (idp->cyi_cpu != cpu)
3481 3484 continue;
3482 3485
3483 3486 cyclic = &cyclics[idp->cyi_ndx];
3484 3487
3485 3488 if (!(cyclic->cy_flags & CYF_PART_BOUND))
3486 3489 continue;
3487 3490
3488 3491 dest = cyclic_pick_cpu(part, c, c, cyclic->cy_flags);
3489 3492
3490 3493 if (dest == NULL) {
3491 3494 /*
3492 3495 * We can't juggle this cyclic; we need to return
3493 3496 * failure (we won't bother trying to juggle away
3494 3497 * other cyclics).
3495 3498 */
3496 3499 CYC_PTRACE("move-out-fail", cpu, idp);
3497 3500 return (0);
3498 3501 }
3499 3502 cyclic_juggle_one_to(idp, dest);
3500 3503 }
3501 3504
3502 3505 CYC_PTRACE1("move-out-done", cpu);
3503 3506 return (1);
3504 3507 }
3505 3508
3506 3509 /*
3507 3510 * void cyclic_suspend()
3508 3511 *
3509 3512 * Overview
3510 3513 *
3511 3514 * cyclic_suspend() suspends all cyclic activity throughout the cyclic
3512 3515 * subsystem. It should be called only by subsystems which are attempting
3513 3516 * to suspend the entire system (e.g. checkpoint/resume, dynamic
3514 3517 * reconfiguration).
3515 3518 *
3516 3519 * Arguments and notes
3517 3520 *
3518 3521 * cyclic_suspend() takes no arguments. Each CPU with an active cyclic
3519 3522 * disables its backend (offline CPUs disable their backends as part of
3520 3523 * the cyclic_offline() operation), thereby disabling future CY_HIGH_LEVEL
3521 3524 * interrupts.
3522 3525 *
3523 3526 * Note that disabling CY_HIGH_LEVEL interrupts does not completely preclude
3524 3527 * cyclic handlers from being called after cyclic_suspend() returns: if a
3525 3528 * CY_LOCK_LEVEL or CY_LOW_LEVEL interrupt thread was blocked at the time
3526 3529 * of cyclic_suspend(), cyclic handlers at its level may continue to be
3527 3530 * called after the interrupt thread becomes unblocked. The
3528 3531 * post-cyclic_suspend() activity is bounded by the pend count on all
3529 3532 * cyclics at the time of cyclic_suspend(). Callers concerned with more
3530 3533 * than simply disabling future CY_HIGH_LEVEL interrupts must check for
3531 3534 * this condition.
3532 3535 *
3533 3536 * On most platforms, timestamps from gethrtime() and gethrestime() are not
3534 3537 * guaranteed to monotonically increase between cyclic_suspend() and
3535 3538 * cyclic_resume(). However, timestamps are guaranteed to monotonically
3536 3539 * increase across the entire cyclic_suspend()/cyclic_resume() operation.
3537 3540 * That is, every timestamp obtained before cyclic_suspend() will be less
3538 3541 * than every timestamp obtained after cyclic_resume().
3539 3542 *
3540 3543 * Return value
3541 3544 *
3542 3545 * None; cyclic_suspend() always succeeds.
3543 3546 *
3544 3547 * Caller's context
3545 3548 *
3546 3549 * The cyclic subsystem must be configured on every valid CPU;
3547 3550 * cyclic_suspend() may not be called during boot or during dynamic
3548 3551 * reconfiguration. Additionally, cpu_lock must be held, and the caller
3549 3552 * cannot be in high-level interrupt context. However, unlike most other
3550 3553 * cyclic entry points, cyclic_suspend() may be called with locks held
3551 3554 * which are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic
3552 3555 * handlers.
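 *
 *	A hedged sketch of the suspend/resume protocol followed by a
 *	system-suspending subsystem, with cpu_lock held across each call:
 *
 *	mutex_enter(&cpu_lock);
 *	cyclic_suspend();
 *	mutex_exit(&cpu_lock);
 *
 *	... suspend, and later resume, the rest of the system ...
 *
 *	mutex_enter(&cpu_lock);
 *	cyclic_resume();
 *	mutex_exit(&cpu_lock);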
3553 3556 */
3554 3557 void
3555 3558 cyclic_suspend()
3556 3559 {
3557 3560 cpu_t *c;
3558 3561 cyc_cpu_t *cpu;
3559 3562 cyc_xcallarg_t arg;
3560 3563 cyc_backend_t *be;
3561 3564
3562 3565 CYC_PTRACE0("suspend");
3563 3566 ASSERT(MUTEX_HELD(&cpu_lock));
3564 3567 c = cpu_list;
3565 3568
3566 3569 do {
3567 3570 cpu = c->cpu_cyclic;
3568 3571 be = cpu->cyp_backend;
3569 3572 arg.cyx_cpu = cpu;
3570 3573
3571 3574 be->cyb_xcall(be->cyb_arg, c,
3572 3575 (cyc_func_t)cyclic_suspend_xcall, &arg);
3573 3576 } while ((c = c->cpu_next) != cpu_list);
3574 3577 }
3575 3578
3576 3579 /*
3577 3580 * void cyclic_resume()
3578 3581 *
3579 3582 * cyclic_resume() resumes all cyclic activity throughout the cyclic
3580 3583 * subsystem. It should be called only by system-suspending subsystems.
3581 3584 *
3582 3585 * Arguments and notes
3583 3586 *
3584 3587 * cyclic_resume() takes no arguments. Each CPU with an active cyclic
3585 3588 * reenables and reprograms its backend (offline CPUs are not reenabled).
3586 3589 * On most platforms, timestamps from gethrtime() and gethrestime() are not
3587 3590 * guaranteed to monotonically increase between cyclic_suspend() and
3588 3591 * cyclic_resume(). However, timestamps are guaranteed to monotonically
3589 3592 * increase across the entire cyclic_suspend()/cyclic_resume() operation.
3590 3593 * That is, every timestamp obtained before cyclic_suspend() will be less
3591 3594 * than every timestamp obtained after cyclic_resume().
3592 3595 *
3593 3596 * Return value
3594 3597 *
3595 3598 * None; cyclic_resume() always succeeds.
3596 3599 *
3597 3600 * Caller's context
3598 3601 *
3599 3602 * The cyclic subsystem must be configured on every valid CPU;
3600 3603 * cyclic_resume() may not be called during boot or during dynamic
3601 3604 * reconfiguration. Additionally, cpu_lock must be held, and the caller
3602 3605 * cannot be in high-level interrupt context. However, unlike most other
3603 3606 * cyclic entry points, cyclic_resume() may be called with locks held which
3604 3607 * are also acquired by CY_LOCK_LEVEL or CY_LOW_LEVEL cyclic handlers.
3605 3608 */
3606 3609 void
3607 3610 cyclic_resume()
3608 3611 {
3609 3612 cpu_t *c;
3610 3613 cyc_cpu_t *cpu;
3611 3614 cyc_xcallarg_t arg;
3612 3615 cyc_backend_t *be;
3613 3616
3614 3617 CYC_PTRACE0("resume");
3615 3618 ASSERT(MUTEX_HELD(&cpu_lock));
3616 3619
3617 3620 c = cpu_list;
3618 3621
3619 3622 do {
3620 3623 cpu = c->cpu_cyclic;
3621 3624 be = cpu->cyp_backend;
3622 3625 arg.cyx_cpu = cpu;
3623 3626
3624 3627 be->cyb_xcall(be->cyb_arg, c,
3625 3628 (cyc_func_t)cyclic_resume_xcall, &arg);
3626 3629 } while ((c = c->cpu_next) != cpu_list);
3627 3630 }
(2303 lines elided)