In order to prepare for executing the code coverage scripts from the
PATH, change the settings logic to search for the script in the PATH
if its filename doesn't contain any directory component.
Note: file search routines were moved (unchanged) in this patch,
in order to avoid forward prototype declarations.
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
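The "no directories in the filename" rule can be sketched like this, mirroring execvp() semantics (the helper name is hypothetical, not the runner's actual code):

```c
#include <string.h>

/* A name with no '/' anywhere in it is looked up in $PATH;
 * anything containing a directory component is used as-is. */
static int needs_path_lookup(const char *name)
{
	return strchr(name, '/') == NULL;
}
```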
|
|
igt_runner converts passing test results to warn if a test outputs
stuff to stderr. It also converts results if kernel logs contain
warnings to dmesg-warn or dmesg-fail, depending on the test's own
result.
Perform this same conversion to subtests even if they contain dynamic
subtests, now that we have the capability of keeping those around.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Swati Sharma <swati2.sharma@intel.com>
Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
|
|
In preparation for testing the rest of the options for prune_mode,
rename the current dynamic subtest result test to reflect the option
value.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
|
|
Allow finer control of reporting dynamic subtests instead of
unconditionally assuming that the main subtest result and logs are
uninteresting if the subtest has dynamic subtests.
The default is still to remove subtest results when the subtest has
dynamic subtests. Other options are:
keep-subtests: Remove the dynamic subtests instead, for cases when a
stable test count is more important.
keep-all: Remove nothing.
keep-requested: Remove the results that were not directly requested to
be executed. This option is useful in cases where the test selection
is a hand-picked mix of subtests and particular dynamic subtests.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Swati Sharma <swati2.sharma@intel.com>
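The option space above can be sketched as an enum plus a pruning decision; the identifiers here are illustrative, not the runner's actual names:

```c
/* Hypothetical sketch of the four prune modes described above. */
enum prune_mode {
	PRUNE_KEEP_DYNAMIC,   /* default: drop the subtest result itself */
	PRUNE_KEEP_SUBTESTS,  /* drop dynamic results, keep a stable test count */
	PRUNE_KEEP_ALL,       /* remove nothing */
	PRUNE_KEEP_REQUESTED, /* drop results not directly requested */
};

/* Should a subtest result that spawned dynamic subtests survive
 * pruning, given whether it was explicitly requested? */
static int keep_subtest_result(enum prune_mode mode, int was_requested)
{
	switch (mode) {
	case PRUNE_KEEP_DYNAMIC:
		return 0;
	case PRUNE_KEEP_REQUESTED:
		return was_requested;
	default:
		return 1;
	}
}
```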
|
|
Instead of stopping execution on resume-init success, stop on
resume-init failure as intended.
Fixes: 4b88a9253443 ("runner: check if it has root permissions")
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Reviewed-by: Mauro Carvalho Chehab <mchehab@kernel.org>
|
|
Ensure that the "-o" parameter will also clean up the contents of the
code coverage results directory.
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
|
|
The gcc compiler has a feature that enables checking code coverage
at runtime[1].
[1] See https://www.kernel.org/doc/html/latest/dev-tools/gcov.html
The Linux Kernel comes with an option to enable this feature:
./scripts/config -e DEBUG_FS -e GCOV_KERNEL
The driver's Makefile also needs changes to enable it. For instance, in
order to enable GCOV for all DRM drivers, one would need to run:
for i in $(find drivers/gpu/drm/ -name Makefile); do
sed '1 a GCOV_PROFILE := y' -i $i
done
This patch adds support for it by:
a) Implementing logic to clean up the code coverage counters via sysfs;
b) Calling a script responsible for collecting code coverage data.
The implementation works in two modes:
1) It zeroes the counters, runs all IGT tests and collects the code
coverage results at the end.
This assumes that no test crashes the driver, as otherwise the
results won't be collected;
This is faster, as collecting code coverage data can take several
seconds.
2) For each test, it cleans the code coverage counters, runs the test
and collects the results.
This is more reliable, as a kernel crash/OOPS won't affect the
results of the previously run tests.
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
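Zeroing the counters amounts to a write to the gcov reset file in debugfs, as documented for CONFIG_GCOV_KERNEL. A minimal sketch, parameterized so it can be exercised against an ordinary file (the function name is illustrative):

```c
#include <stdio.h>

/* The in-kernel path is /sys/kernel/debug/gcov/reset; any write to
 * it zeroes all code coverage counters. */
static int reset_gcov_counters(const char *reset_path)
{
	FILE *f = fopen(reset_path, "w");

	if (!f)
		return -1;
	fputs("0\n", f);
	return fclose(f);
}
```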
|
|
Without root permissions, most IGT tests won't actually run, but they
would be displayed in the runner's output as if everything went fine.
To avoid that, check whether one attempts to run IGT without root
permission. The check can be disabled with a new command line option:
--allow-non-root
As runner_tests runs as non-root, most unit tests need to pass
--allow-non-root so that they don't return an error.
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
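The gate described above boils down to an effective-uid check with an opt-out; a minimal sketch with an illustrative function name and message:

```c
#include <stdio.h>
#include <unistd.h>

/* Refuse to run unless we are root or the user explicitly
 * passed --allow-non-root. */
static int check_root_permissions(int allow_non_root)
{
	if (geteuid() == 0 || allow_non_root)
		return 0;
	fprintf(stderr,
		"Runner needs root privileges, or pass --allow-non-root\n");
	return -1;
}
```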
|
|
Add typechecking to the min/max macros and make their locals truly
unique-ish to reduce the risk of shadowing.
v2: small bug fix, write also height coordinate on rotation
test. (jheikkil)
v3: Fix up a couple of other max/max_t instances (Ashutosh)
Signed-off-by: Juha-Pekka Heikkilä <juha-pekka.heikkila@intel.com>
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Chris Wilson <chris.p.wilson@intel.com>
Reviewed-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
|
|
Now that autotools is gone and you always build to a dedicated build
directory with meson, we can remove .gitignore files that only had the
purpose of ignoring built binaries in the source directories.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Arkadiusz Hiler <arek@hiler.eu>
|
|
The SIGHUP handling was incorrectly done only when log level was at
least 'normal'.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Commit 10c83dfa5ad2 ("runner: Introduce a way to stop testing without
marking tests incomplete") added graceful exiting by signaling the
runner with SIGHUP, marking the currently running test as 'notrun'
instead of 'incomplete'. The result handling didn't handle dynamic
subtests though, so the currently executing dynamic subtest was still
marked as 'incomplete'. Handle that now similarly as handling the
'abort' result.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Reviewed-by: Arkadiusz Hiler <arek@hiler.eu>
|
|
Killing igt_runner with SIGHUP will now still kill the currently
running test, but it will mark that test as "notrun" instead of
"incomplete". This allows external tools to interrupt the testing
without messing up the results.
Incidentally, Intel CI's testing procedures occasionally falsely
determine that the machine being tested is unreachable and as its next
step, will ssh in and issue a reboot in preparation for the next round
of testing, causing igt_runner to be killed with a SIGHUP...
v2:
- Fix typo SIGUP -> SIGHUP
- Make runner print that a graceful exit will be done
- Explain the code flow regarding handling of signals to the runner process
- Use GRACEFUL_EXITCODE instead of -SIGHUP directly
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
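The handling can be sketched as the usual flag-setting handler: the signal handler only records the request, and the runner's main loop later kills the current test and reports it as "notrun". Names here are illustrative, not the runner's actual code:

```c
#include <signal.h>

static volatile sig_atomic_t graceful_exit_requested;

/* Only set a flag in the handler; all real work happens in the
 * main loop, which is async-signal-safe by construction. */
static void sighup_handler(int sig)
{
	(void)sig;
	graceful_exit_requested = 1;
}

static void install_graceful_exit(void)
{
	signal(SIGHUP, sighup_handler);
}
```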
|
|
Introduced in commit 532d6e84ab7f ("lib: Process kernel taints"):
../runner/executor.c: In function ‘handle_taint’:
../runner/executor.c:324:18: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
while ((explain = igt_explain_taints(&bad))) {
^
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
A small library routine to read '/proc/sys/kernel/taints' and check for
a fatal condition. This is currently used by the runner, but is also
useful for some tests.
v2,3: function docs
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
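The file in question holds a decimal bitmask, so the routine reduces to a read plus a mask test. A sketch, with an illustrative "fatal" mask using the kernel's documented bit numbers for TAINT_DIE (7) and TAINT_WARN (9):

```c
#include <stdio.h>

#define FATAL_TAINTS ((1UL << 7) | (1UL << 9))

/* /proc/sys/kernel/taints contains a single decimal bitmask. */
static unsigned long read_taints(const char *path)
{
	unsigned long taints = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%lu", &taints) != 1)
			taints = 0;
		fclose(f);
	}
	return taints;
}

static int taints_are_fatal(unsigned long taints)
{
	return (taints & FATAL_TAINTS) != 0;
}
```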
|
|
realloc() and friends return NULL if they fail; simplify the
new_escaped_json_string() by allocating all the necessary memory
up-front and checking for a failed allocation.
new_escaped_json_string() can already return NULL since
json_object_new_string_len() returns NULL for various undocumented error
paths, and NULL is valid input for json_object_object_add(), which
new_escaped_json_string() is currently exclusively used with. Thus,
returning NULL when memory allocation fails should be safe.
Signed-off-by: Jeremy Cline <jcline@redhat.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
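The up-front allocation trick works because the worst case is bounded: every input byte can expand to at most a 6-byte "\uXXXX" escape, so one malloc() covers any input and the only failure mode is returning NULL. A sketch with an illustrative escape table, not the runner's actual one:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *escape_json_string(const char *in)
{
	char *out, *p;

	/* Worst case: every byte becomes "\uXXXX" (6 bytes), plus NUL. */
	out = p = malloc(strlen(in) * 6 + 1);
	if (!out)
		return NULL;

	for (; *in; in++) {
		unsigned char c = *in;

		if (c == '"' || c == '\\')
			p += sprintf(p, "\\%c", c);
		else if (c == '\n')
			p += sprintf(p, "\\n");
		else if (c < 0x20)
			p += sprintf(p, "\\u%04x", c);
		else
			*p++ = c;
	}
	*p = '\0';
	return out;
}
```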
|
|
We may still be interested in results of a test even if it has tainted
the kernel. On the other hand, we need to kill the test on taint if no
other means of killing it on a jam is active.
If abort on both kernel taint or a timeout is requested, decrease all
potential timeouts significantly while the taint is detected instead of
aborting immediately. However, report the taint as the reason of the
abort if a timeout decreased by the taint expires.
v2: Fix missing show_kernel_task_state() lost on rebase conflict
resolution (Chris - thanks!)
Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
If we don't have --abort=taint active and there is a kernel taint, a
test exiting normally caused the runner to inject a "this test was
killed" message into the test's output. Make sure we only inject that
if we really did kill the test, and journal the test exit correctly as
well.
Same goes for the message for exceeding disk usage limits.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arek@hiler.eu>
Cc: Lukasz Fiedorowicz <lukasz.fiedorowicz@intel.com>
Reviewed-by: Lukasz Fiedorowicz <lukasz.fiedorowicz@intel.com>
|
|
If we can't kill the (main) test process, or when the test process
exits with IGT_EXIT_ABORT, we abort the execution. Pass that
information along to the other machinery that tracks whether we
aborted, thus also getting that information to the end user in the
form of the pseudo-result igt@runner@aborted.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Lukasz Fiedorowicz <lukasz.fiedorowicz@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Lukasz Fiedorowicz <lukasz.fiedorowicz@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Previously, when a test exited with IGT_EXIT_ABORT, we did abort, but
silently. Print a message so the runner logs clearly state why we
didn't execute the rest of the tests.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
igt_resume is typically used in a scripted loop, with automatic
reboots and retries. It already uses separate exit codes for being
done (0), requiring a reboot and a retry (1) and requiring a retry due
to exceeding its overall time quota (2). Introduce another exit code,
127, for circumstances where execution in the given directory is
impossible and retrying would never terminate. One example case is
a kernel bug causing the filesystem to go enough bonkers to make the
runner settings files disappear.
As a drive-by improvement, also print something when that happens.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
Disk usage limit is a limit of disk space taken, per (dynamic)
subtest. If the test's output, kernel log included, exceeds this
limit, the test is killed, similarly to killing the test when the
kernel gets tainted.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
Normally, the runner injecting a message into the test's stdout/stderr
logs has a race condition; the test outputs have special lines (subtest
starting/ending) and accidentally injecting stuff in between would
cause funky results.
When we're killing a test because the kernel got tainted, we know
already that we're not getting a subtest ending line and we can
inject, if we make sure we have newlines printed before and after the
injection.
Having a message in the stdout of the test will aid automatic bug
filtering.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
Noticed while running some tests that adding any kind of spaces into the
name of a test run would stop igt_resume from working for said test run.
Turns out that when we parse test metadata, we accidentally use the
'%ms' specifier with fscanf() which finishes parsing strings when any
kind of whitespace is encountered.
So, fix this by using the proper %m[^\n] specifier, which dynamically
allocates its result and doesn't stop reading the string until a
newline is encountered. Additionally, add a test for this.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
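The difference in miniature: both conversions malloc() their result via the POSIX.1-2008 'm' modifier, but '%ms' stops at the first whitespace while '%m[^\n]' reads up to the newline. A sketch with a hypothetical field name:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a "name : <value>" metadata line, keeping embedded spaces. */
static char *parse_name_field(const char *line)
{
	char *name = NULL;

	if (sscanf(line, "name : %m[^\n]", &name) != 1)
		return NULL;
	return name;
}
```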
|
|
Sometimes creating the string representation fails. This usually happens
when we have huge logs (e.g. something was spamming the dmesg) or the
result generation was run on a very low-end system (e.g. an embedded
board with 256 megs of RAM).
Sadly, the json-c call returns NULL and provides no explanation
whatsoever. Let's fix the NULL pointer dereference in such cases and
print a message that should help people make sense of what has just
happened.
Cc: Swati Sharma <swati2.sharma@intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
DRM logging in the kernel has switched to device based logging,
changing the pattern of logging from
[drm:function_name] Message
to
i915 0000:00:02.0: [drm] Message
Change the pattern we use with --piglit-style-dmesg to match the new
style of logging, catching all warnings that contain [drm]. We're no
longer result-wise exactly piglit compatible, but then again that ship
sailed a long time ago with the introduction of the dmesg-level
parameter. Piglit compatibility result-wise is not a goal regardless.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
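A minimal sketch of the matching rule, accepting both the old "[drm:function_name]" form and the new device-based "[drm]" form (the helper name is illustrative, and the runner's actual matching goes through its regex machinery rather than strstr()):

```c
#include <string.h>

static int is_drm_message(const char *msg)
{
	return strstr(msg, "[drm]") != NULL ||
	       strstr(msg, "[drm:") != NULL;
}
```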
|
|
PCI devices' B/D/F numbers are in hex and can therefore contain
letters. Fix the regexp so that such B/D/F numbers also match.
Cc: Petri Latvala <petri.latvala@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Swathi Dhanavanthri <swathi.dhanavanthri@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
In a few cases, we hit a timeout where no process appears to be
deadlocked (i.e. tasks stuck in 'D' with intertwined stacks) but
everything appears to be running happily. Often, they appear to be
fighting over the shrinker, so one naturally presumes we are running low
on memory. But for tests that were designed to run with ample memory to
spare, that is a little disconcerting and I would like to know where the
memory actually went.
sysrq('m'): Will dump current memory info to your console
Sounds like that should do the trick.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Acked-by: Petri Latvala <petri.latvala@intel.com>
|
|
To help verify correct deployment, add a --version flag to igt_runner
that just prints the IGT-version text, the same tests would print.
Note that only igt_runner gained the --version flag. igt_resume and
igt_results don't do fancy flag handling; they only accept the
directory to operate on as their single argument.
v2: Depend on version.h (CI)
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
Include the reason why we are dumping the task state (test timeout) in
the kmsg log prior to the task state. Hopefully this helps when reading
the dump.
v2: Use asprintf to combine the strings into one to avoid error prone
manual string handling and enjoy one single write() into the kmsg.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
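The v2 approach can be sketched as building the whole message with one asprintf() so a single write() into /dev/kmsg emits it without interleaving with other writers. Names here are illustrative:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Combine the reason and the task-state dump into one buffer,
 * suitable for a single write() into kmsg. Caller frees. */
static char *format_kmsg_dump(const char *reason, const char *task_state)
{
	char *buf;

	if (asprintf(&buf, "runner: %s\n%s", reason, task_state) < 0)
		return NULL;
	return buf;
}
```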
|
|
One missing fdatasync() for starting a subtest.
Fixes: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/81
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Instead of repeating every single time we overflow the read from kmsg,
just once per test is enough warning.
v2: Just suppress the multiple s/underflow/overflow/ messages. Having a
buffer smaller than a single kmsg packet is unlikely.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
Now that the IGT tests have a mechanism for signaling broken testing
conditions we can stop the run on the first test that has noticed it,
and possibly has triggered that state.
Traditionally, the run would have continued with that test failing,
and the side effects would trickle down into the other tests, causing
a lot of skips/fails.
v2: extra explanations, small cleanup (Petri)
Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
To make this bit of code more readable and to reuse it in the
following patch.
Signed-off-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|
|
Instead of reading one record at a time between select() calls and
tainted-checks etc, use the same at-the-end dmesg dumper whenever
there's activity in /dev/kmsg. It's possible that the occasional chunk
of missing dmesg we're sometimes hitting is due to reading too slowly,
especially if there's a huge gem traceback.
Also print a clear message if we hit a log buffer underrun so we know
about it.
Reference: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/79
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
If a machine is hard-hanging or otherwise rebooted at the correct
time, intermediary output files get created but nothing ever gets
written to them. That yields results that are completely empty and
hard to categorize or even sometimes detect automatically. Handle this
corner case explicitly with a custom text explaining what might have
happened to prod result analysis towards fixing the real issue instead
of wondering if test result processing is faulty.
The race for getting empty files is easier to hit than it seems. The
files get created by the runner before calling exec(), and there's
plenty of time to hit a really hard crash.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
A new config option, --per-test-timeout, sets a time a single test
cannot exceed without getting itself killed. The timer resets when
starting a subtest or a dynamic subtest, so an execution with
--per-test-timeout=20 can indeed go over 20 seconds as long as it
launches a dynamic subtest within that time.
As a bonus, verbose log level from runner now also prints dynamic
subtest begin/result.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
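The reset rule above boils down to measuring the deadline from the last (dynamic) subtest start rather than from test launch. A sketch with explicit timestamps (names illustrative; treating a timeout of 0 as disabled is an assumption here):

```c
#include <time.h>

static int per_test_timeout_expired(time_t last_subtest_start,
				    time_t now, int timeout_s)
{
	/* timeout_s == 0 means no per-test timeout configured */
	return timeout_s > 0 && now - last_subtest_start > timeout_s;
}
```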
|
|
Instead of aiming for inactivity_timeout and splitting that into
suitable intervals for watchdog pinging, replace the whole logic with
one-second select() timeouts and checking if we're reaching a timeout
condition based on current time and the time passed since a particular
event, be it the last activity or the time of signaling the child
processes.
With the refactoring, we gain a couple of new features for free:
- use-watchdog now makes sense even without
inactivity-timeout. Previously use-watchdog was silently ignored if
inactivity-timeout was not set. Now, watchdogs will be used always if
configured so, effectively ensuring the device gets rebooted if
userspace dies without other timeout tracking.
- Killing tests early on kernel taint now happens even
earlier. Previously on an inactive system we possibly waited for some
tens of seconds before checking kernel taints.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
|
|
In a very rudimentary and undocumented manner, testlist files can now
have dynamic subtests specified. This feature is intended for very
special cases, and the main supported mode of operation with testlist
files is still the CI-style "run it all no matter what".
The syntax for testlist files is:
igt@binary@subtestname@dynamicsubtestname
As dynamic subtests are not easily listable, any helpers for
generating such testlists are not implemented.
If running in multiple-mode, subtests with dynamic subtests specified
will run in single-mode instead.
Closes: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/45
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
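Splitting the igt@binary@subtestname@dynamicsubtestname syntax into its components can be sketched with sscanf; the buffer sizes and function name are arbitrary choices for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Returns 3 when all of binary, subtest and dynamic subtest were
 * present; fewer matches mean a shorter testlist entry. */
static int parse_testlist_entry(const char *entry,
				char *binary, char *subtest, char *dynamic)
{
	return sscanf(entry, "igt@%63[^@]@%63[^@]@%63[^@\n]",
		      binary, subtest, dynamic);
}
```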
|
|
If we're checking for taints, we kill the test as soon as we notice a
taint. Out of the box, such a kill gets marked as a 'timeout' result,
which is misleading. The test didn't spend too much time, it just did
nasties.
Make sure taint-killing results in an 'incomplete' result
instead. It's still not completely truthful about the state of the
testing, but closer than a 'timeout', and it stands out more in CI
result analysis.
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
If the kernel is tainted, it stays tainted, so make sure the execution
monitoring still reaches the output collectors and other fd change
handlers.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
If someone wants to execute tests without aborting when tainted, they
get all their tests just straight up killed on the first taint without
actually aborting execution. Obey their wishes and keep running.
Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
If the kernel OOPSed during the test, it is unlikely to ever complete.
Furthermore, we have the reason why it won't complete and so do not need
to burden ourselves with the full stacktrace of every process -- or at
least we have a more pressing bug to fix before worrying about the
system deadlock.
v2: Log the post-taint killing.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Petri Latvala <petri.latvala@intel.com>
|