| Commit message | Author | Age | Files | Lines |
|
[ Upstream commit 514fddba845ed3a1b17e01e99cb3a2a52256a88a ]
The kernel should never gate the EMC clock, as doing so causes an immediate
lockup, so removing the clk-gate functionality doesn't affect anything.
Turning the EMC clk gate into a divider allows glitch-free EMC scaling to be
implemented, avoiding reparenting to a backup clock.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
[ Upstream commit a4dbbceeee3e0ba670875a147237d6566de78840 ]
Fix some incorrect data in the LVL2 offset and bit mask.
Fixes: e403d0057343 ("clk: tegra: MBIST work around for Tegra210")
Signed-off-by: Joseph Lo <josephl@nvidia.com>
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
[ Upstream commit 9caec6620f25b6d15646bbdb93062c872ba3b56f ]
Currently the default clock rates for the HDA and HDA2CODEC_2X clocks
are both 19.2 MHz. However, the default rates for these clocks should
actually be 51 MHz and 48 MHz, respectively. The current clock settings
result in distorted output during audio playback. Correct the default
clock rates for these clocks by specifying them in the clock init table
for Tegra210.
Cc: stable@vger.kernel.org
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
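Such defaults are expressed as entries in the Tegra clock init table. As an
illustration, entries of this shape set a parent and rate at init time (the
entries and values below are illustrative rather than copied from the patch;
the layout follows struct tegra_clk_init_table in drivers/clk/tegra/clk.h):

    #include <dt-bindings/clock/tegra210-car.h>
    #include "clk.h"    /* struct tegra_clk_init_table, tegra_init_from_table() */

    static struct tegra_clk_init_table tegra210_audio_init_table[] __initdata = {
            /* clk_id                     parent_id             rate      state */
            { TEGRA210_CLK_HDA,           TEGRA210_CLK_PLL_P,   51000000, 0 },
            { TEGRA210_CLK_HDA2CODEC_2X,  TEGRA210_CLK_PLL_P,   48000000, 0 },
            /* sentinel terminating the table */
            { TEGRA210_CLK_CLK_MAX,       TEGRA210_CLK_CLK_MAX, 0,        0 },
    };

The driver then hands such a table to tegra_init_from_table(), which applies
the parent and rate settings during clock initialization.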
|
[ Upstream commit 845d782d91448e0fbca686bca2cc9f9c2a9ba3e7 ]
The maximum frequency supported for I2S on Tegra124 and Tegra210 is
24.576MHz (as stated in the Tegra TK1 data sheet for Tegra124 and the
Jetson TX1 module data sheet for Tegra210). However, the maximum I2S
frequency is limited to 24MHz because that is the maximum frequency of
the audio sync clock. Increase the maximum audio sync clock frequency
to 24.576MHz for Tegra124 and Tegra210 in order to support 24.576MHz
for I2S.
Update the tegra_clk_register_sync_source() function so that it does
not set the initial rate for the sync clocks, and instead use the clock
init tables to set the initial rate.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
[ Upstream commit 0d34dfbf3023cf119b83f6470692c0b10c832495 ]
Full-speed and low-speed USB devices do not work with Tegra210
platforms because of incorrect PLLU/PLLU_OUT1 clock settings.
When a full-speed device is connected:
[ 14.059886] usb 1-3: new full-speed USB device number 2 using tegra-xusb
[ 14.196295] usb 1-3: device descriptor read/64, error -71
[ 14.436311] usb 1-3: device descriptor read/64, error -71
[ 14.675749] usb 1-3: new full-speed USB device number 3 using tegra-xusb
[ 14.812335] usb 1-3: device descriptor read/64, error -71
[ 15.052316] usb 1-3: device descriptor read/64, error -71
[ 15.164799] usb usb1-port3: attempt power cycle
When a low-speed device is connected:
[ 37.610949] usb usb1-port3: Cannot enable. Maybe the USB cable is bad?
[ 38.557376] usb usb1-port3: Cannot enable. Maybe the USB cable is bad?
[ 38.564977] usb usb1-port3: attempt power cycle
This commit fixes the issue by:
1. initializing PLLU_OUT1 before initializing the XUSB_FS_SRC clock,
because PLLU_OUT1 is the parent of XUSB_FS_SRC.
2. changing the PLLU post-divider to /2 (DIVP=1) according to the
Technical Reference Manual.
Fixes: e745f992cf4b ("clk: tegra: Rework pll_u")
Signed-off-by: JC Kuo <jckuo@nvidia.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
commit 40db569d6769ffa3864fd1b89616b1a7323568a8 upstream.
There are misplaced parentheses in the code that result in a wrong
configuration being programmed for PLLM. The original fix was made by
Danny Huang in the downstream kernel. The patch was tested on a Nyan Big
Tegra124 Chromebook; PLLM rate changing works correctly now and the system
doesn't lock up after changing the PLLM rate due to EMC scaling.
Cc: <stable@vger.kernel.org>
Tested-by: Steev Klimaszewski <steev@kali.org>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
[ Upstream commit d39eca547f3ec67140a5d765a426eb157b978a59 ]
If tegra_dfll_unregister() fails then "soc" is an error pointer. We
should just return instead of dereferencing it.
Fixes: 1752c9ee23fb ("clk: tegra: dfll: Fix drvdata overwriting issue")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
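The fix amounts to the standard error-pointer check before any dereference;
a minimal sketch using IS_ERR()/PTR_ERR() from <linux/err.h> (the surrounding
remove() code here is simplified, not quoted from the driver):

    soc = tegra_dfll_unregister(pdev);
    if (IS_ERR(soc)) {
            dev_err(&pdev->dev, "failed to unregister DFLL: %ld\n",
                    PTR_ERR(soc));
            return PTR_ERR(soc);
    }

    /* only past this point is it safe to dereference soc */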
|
'clk-tegra-sdmmc-jitter', 'clk-allwinner' and 'clk-uniphier' into clk-next
* clk-imx6-ocram:
: - i.MX6SX ocram_s clk support
clk: imx: add ocram_s clock for i.mx6sx
* clk-missing-put:
: - Add missing of_node_put()s in some i.MX clk drivers
clk: imx6sll: fix missing of_node_put()
clk: imx6ul: fix missing of_node_put()
* clk-tegra-sdmmc-jitter:
: - Tegra SDMMC clk jitter improvements with high speed signaling modes
clk: tegra: make sdmmc2 and sdmmc4 as sdmmc clocks
clk: tegra: Add sdmmc mux divider clock
clk: tegra: Refactor fractional divider calculation
clk: tegra: Fix includes required by fence_udelay()
* clk-allwinner:
clk: sunxi-ng: add A64 compatible string
dt-bindings: add compatible string for the A64 DE2 CCU
clk: sunxi-ng: r40: Export video PLLs
clk: sunxi-ng: r40: Allow setting parent rate to display related clocks
clk: sunxi-ng: r40: Add minimal rate for video PLLs
* clk-uniphier:
: - Uniphier NAND, USB3 PHY, and SPI clk support
clk: uniphier: add clock frequency support for SPI
clk: uniphier: add more USB3 PHY clocks
clk: uniphier: add NAND 200MHz clock
|
These clocks have low jitter paths to certain parents. To model these
correctly, use the sdmmc mux divider clock type.
Signed-off-by: Peter De-Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Aapo Vienamo <avienamo@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
Add a clock type to model the sdmmc switch divider clocks, which have paths
to the source clocks that bypass the divider (low-jitter paths). These
are handled by selecting the low-jitter path when the divider is 1 (i.e. the
rate equals the parent rate); otherwise the normal path through the divider
is selected. In all other respects this clock behaves as a normal peripheral
clock.
Signed-off-by: Peter De-Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Aapo Vienamo <avienamo@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
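The path selection described above boils down to a simple decision on the
requested rate; a self-contained sketch of just that decision logic (the
names are illustrative, not the driver's):

    enum sdmmc_path { SDMMC_PATH_LOW_JITTER, SDMMC_PATH_DIVIDED };

    /* a divider of 1 means the output rate equals the parent rate */
    static enum sdmmc_path sdmmc_pick_path(unsigned long rate,
                                           unsigned long parent_rate)
    {
            if (rate >= parent_rate)
                    return SDMMC_PATH_LOW_JITTER;

            return SDMMC_PATH_DIVIDED;
    }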
|
Move this to a separate file so it can be used to calculate the sdmmc
clock dividers.
Signed-off-by: Peter De-Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Aapo Vienamo <avienamo@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
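The Tegra peripheral dividers are 7.1-bit fractional dividers, where the
output rate is parent_rate * 2 / (n + 2) for a register value n; a
self-contained sketch of the calculation being factored out (the real
helper's name, bounds checking and rounding details may differ):

    /* return n such that parent_rate * 2 / (n + 2) does not exceed rate */
    static long div71_get(unsigned long rate, unsigned long parent_rate)
    {
            unsigned long div;

            if (!rate)
                    return -1;

            /* round the divider up so the resulting rate is <= the request */
            div = (parent_rate * 2 + rate - 1) / rate;
            if (div < 2)
                    div = 2;

            return div - 2;
    }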
|
Add the missing linux/delay.h include statement for udelay(), which is
used by the fence_udelay() macro.
Signed-off-by: Aapo Vienamo <avienamo@nvidia.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
'clk-tegra-critical' and 'clk-tegra-emc-oob' into clk-next
* clk-imx-critical:
: - Convert to CLK_IS_CRITICAL for i.MX51/53 driver
clk: imx51-imx53: Include sizes.h to silence compile errors
clk: imx51-imx53: Annotate critical clocks as CLK_IS_CRITICAL
* clk-tegra-bpmp:
: - Fix Tegra BPMP driver oops when xlating a NULL clk
clk: tegra: bpmp: Don't crash when a clock fails to register
* clk-tegra-124:
: - Proper default configuration for vic03 and vde clks on Tegra124
clk: tegra: Make vde a child of pll_c3
clk: tegra: Make vic03 a child of pll_c3
* clk-tegra-critical:
: - Mark Tegra memory controller clks as critical
clk: tegra: Mark Memory Controller clock as critical
* clk-tegra-emc-oob:
: - Fix array bounds clamp in Tegra's emc determine_rate() op
clk: tegra: emc: Avoid out-of-bounds bug
|
Apparently there was an attempt to avoid out-of-bounds accesses when there
is only one memory timing available, but there is a typo in the code that
defeats that attempt.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
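The intent being restored is the usual guard that keeps the lookup index
inside the timings array; an illustrative sketch of that pattern (not the
driver's actual loop or variable names):

    /*
     * Pick the first timing that satisfies the request; if none does, fall
     * back to the last (and possibly only) entry instead of indexing past
     * the end of the array.
     */
    for (i = 0; i < num_timings; i++) {
            if (timings[i].rate >= req_rate)
                    break;
    }
    if (i == num_timings)
            i = num_timings - 1;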
|
The Memory Controller should always be on. The sibling EMC clock is already
marked as critical, so let's mark the MC clock as critical too for consistency.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
By default the VDE clock's parent is left at its reset default, clk_m.
However, that is not a configuration that will allow the VDE to function.
Reparent it to pll_c3 instead to make sure the hardware can actually decode
video content.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
By default, the vic03 clock is a child of pll_m but that runs at 924 MHz
which is too fast for VIC. Make vic03 a child of pll_c3 by default so it
will run at a supported frequency.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
When registering clocks, we just skip any that fail to register
(leaving a NULL hole in the clock table). However, our of_xlate
function still tries to dereference each entry while looking for
the clock with the requested id, causing a crash if any clocks
failed to register. Add a check to of_xlate to skip any NULL
clocks.
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
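A rough sketch of the described xlate check, with the BPMP driver's data
structures reduced to assumed fields (clocks[], num_clocks, id, hw); the real
driver's types and naming differ:

    static struct clk_hw *tegra_bpmp_clk_of_xlate(struct of_phandle_args *clkspec,
                                                  void *data)
    {
            struct tegra_bpmp *bpmp = data;
            unsigned int id = clkspec->args[0];
            unsigned int i;

            for (i = 0; i < bpmp->num_clocks; i++) {
                    struct tegra_bpmp_clk *clk = bpmp->clocks[i];

                    /* skip the NULL holes left by clocks that failed to register */
                    if (!clk)
                            continue;

                    if (clk->id == id)
                            return &clk->hw;
            }

            return ERR_PTR(-EINVAL);
    }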
|
The kzalloc() function has a 2-factor argument form, kcalloc(). This
patch replaces cases of:
kzalloc(a * b, gfp)
with:
kcalloc(a, b, gfp)
as well as handling cases of:
kzalloc(a * b * c, gfp)
with:
kzalloc(array3_size(a, b, c), gfp)
as it's slightly less ugly than:
kzalloc_array(array_size(a, b), c, gfp)
This does, however, attempt to ignore constant size factors like:
kzalloc(4 * 1024, gfp)
though any constants defined via macros get caught up in the conversion.
Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.
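For instance, a hypothetical 2-factor allocation site is converted like this:

    /* before: the open-coded multiplication can overflow silently */
    entries = kzalloc(count * sizeof(*entries), GFP_KERNEL);

    /* after: kcalloc() checks the count * size product for overflow */
    entries = kcalloc(count, sizeof(*entries), GFP_KERNEL);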
The Coccinelle script used for this was:
// Fix redundant parens around sizeof().
@@
type TYPE;
expression THING, E;
@@
(
kzalloc(
- (sizeof(TYPE)) * E
+ sizeof(TYPE) * E
, ...)
|
kzalloc(
- (sizeof(THING)) * E
+ sizeof(THING) * E
, ...)
)
// Drop single-byte sizes and redundant parens.
@@
expression COUNT;
typedef u8;
typedef __u8;
@@
(
kzalloc(
- sizeof(u8) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(__u8) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(char) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(unsigned char) * (COUNT)
+ COUNT
, ...)
|
kzalloc(
- sizeof(u8) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(__u8) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(char) * COUNT
+ COUNT
, ...)
|
kzalloc(
- sizeof(unsigned char) * COUNT
+ COUNT
, ...)
)
// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@
(
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (COUNT_ID)
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * COUNT_ID
+ COUNT_ID, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (COUNT_CONST)
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * COUNT_CONST
+ COUNT_CONST, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (COUNT_ID)
+ COUNT_ID, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * COUNT_ID
+ COUNT_ID, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (COUNT_CONST)
+ COUNT_CONST, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * COUNT_CONST
+ COUNT_CONST, sizeof(THING)
, ...)
)
// 2-factor product, only identifiers.
@@
identifier SIZE, COUNT;
@@
- kzalloc
+ kcalloc
(
- SIZE * COUNT
+ COUNT, SIZE
, ...)
// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@
(
kzalloc(
- sizeof(TYPE) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(TYPE) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(TYPE))
, ...)
|
kzalloc(
- sizeof(THING) * (COUNT) * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * (COUNT) * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * COUNT * (STRIDE)
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
|
kzalloc(
- sizeof(THING) * COUNT * STRIDE
+ array3_size(COUNT, STRIDE, sizeof(THING))
, ...)
)
// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@
(
kzalloc(
- sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
, ...)
|
kzalloc(
- sizeof(THING1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(THING1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(THING1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * COUNT
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
|
kzalloc(
- sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+ array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
, ...)
)
// 3-factor product, only identifiers, with redundant parens removed.
@@
identifier STRIDE, SIZE, COUNT;
@@
(
kzalloc(
- (COUNT) * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * (STRIDE) * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * STRIDE * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- (COUNT) * (STRIDE) * (SIZE)
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
|
kzalloc(
- COUNT * STRIDE * SIZE
+ array3_size(COUNT, STRIDE, SIZE)
, ...)
)
// Any remaining multi-factor products, first at least 3-factor products,
// when they're not all constants...
@@
expression E1, E2, E3;
constant C1, C2, C3;
@@
(
kzalloc(C1 * C2 * C3, ...)
|
kzalloc(
- (E1) * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- (E1) * (E2) * E3
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- (E1) * (E2) * (E3)
+ array3_size(E1, E2, E3)
, ...)
|
kzalloc(
- E1 * E2 * E3
+ array3_size(E1, E2, E3)
, ...)
)
// And then all remaining 2 factors products when they're not all constants,
// keeping sizeof() as the second factor argument.
@@
expression THING, E1, E2;
type TYPE;
constant C1, C2, C3;
@@
(
kzalloc(sizeof(THING) * C2, ...)
|
kzalloc(sizeof(TYPE) * C2, ...)
|
kzalloc(C1 * C2 * C3, ...)
|
kzalloc(C1 * C2, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * (E2)
+ E2, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(TYPE) * E2
+ E2, sizeof(TYPE)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * (E2)
+ E2, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- sizeof(THING) * E2
+ E2, sizeof(THING)
, ...)
|
- kzalloc
+ kcalloc
(
- (E1) * E2
+ E1, E2
, ...)
|
- kzalloc
+ kcalloc
(
- (E1) * (E2)
+ E1, E2
, ...)
|
- kzalloc
+ kcalloc
(
- E1 * E2
+ E1, E2
, ...)
)
Signed-off-by: Kees Cook <keescook@chromium.org>
|
and 'clk-debugfs-simple' into clk-next
* clk-imx7d:
clk: imx7d: reset parent for mipi csi root
clk: imx7d: fix mipi dphy div parent
* clk-hisi-stub:
clk/driver/hisi: Consolidate the Kconfig for the CLOCK_STUB
* clk-mvebu:
clk: mvebu: use correct bit for 98DX3236 NAND
* clk-imx6-epit:
clk: imx6: add EPIT clock support
* clk-debugfs-simple:
clk: Return void from debug_init op
clk: remove clk_debugfs_add_file()
clk: tegra: no need to check return value of debugfs_create functions
clk: davinci: no need to check return value of debugfs_create functions
clk: bcm2835: no need to check return value of debugfs_create functions
clk: no need to check return value of debugfs_create functions
|
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
The return values of these functions were never checked in the end
anyway, so it is obvious this does not change any functionality :)
Cc: Peter De Schrijver <pdeschrijver@nvidia.com>
Cc: Prashant Gaikwad <pgaikwad@nvidia.com>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
|
CDEV1 and CDEV2 clocks are a bit of a special case: their parent clock is
created by the pinctrl driver. It is possible for a clk user to request these
clocks before the pinctrl driver has been probed, in which case the user will
get an orphaned clock. That is undesirable because the user may expect the
parent clock to be enabled by the child, so let's return -EPROBE_DEFER until
the parent clock appears.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
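A rough sketch of the deferral described above, built on the generic
clk-provider helpers __clk_get_hw() and clk_hw_get_parent(); the surrounding
lookup function is simplified and not the driver's actual code:

    static struct clk *tegra_cdev_clk_get(struct of_phandle_args *clkspec,
                                          void *data)
    {
            struct clk *clk = of_clk_src_onecell_get(clkspec, data);

            if (IS_ERR(clk))
                    return clk;

            /*
             * If the pinctrl driver providing the parent mux has not probed
             * yet, the clock is still an orphan: ask the caller to retry
             * later instead of handing out a parentless clock.
             */
            if (!clk_hw_get_parent(__clk_get_hw(clk)))
                    return ERR_PTR(-EPROBE_DEFER);

            return clk;
    }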
|
The parents of the CDEV1/2 clocks are determined by the muxing of the
corresponding pins. The pinctrl driver now provides the CDEV1/2 clock muxes,
and hence the CDEV1/2 clocks can have correct parents. Set the CDEV1/2
parents to the corresponding muxes to fix them.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Marcel Ziswiler <marcel@ziswiler.com>
Tested-by: Marcel Ziswiler <marcel@ziswiler.com>
Tested-by: Marc Dietrich <marvin24@gmx.de>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
The CDEV1/CDEV2 clocks can have the corresponding oscillator clock divider
as a parent. Add these dividers in order to be able to provide that parent
option.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Marcel Ziswiler <marcel@ziswiler.com>
Tested-by: Marcel Ziswiler <marcel@ziswiler.com>
Tested-by: Marc Dietrich <marvin24@gmx.de>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
Turns out the latest upstream U-Boot does not configure/enable pll_u, which
leaves it at some default rate of 500 kHz:
root@apalis-t30:~# cat /sys/kernel/debug/clk/clk_summary | grep pll_u
pll_u 3 3 0 500000 0
Of course this won't quite work, leading to the following messages:
[ 6.559593] usb 2-1: new full-speed USB device number 2 using tegra-ehci
[ 11.759173] usb 2-1: device descriptor read/64, error -110
[ 27.119453] usb 2-1: device descriptor read/64, error -110
[ 27.389217] usb 2-1: new full-speed USB device number 3 using tegra-ehci
[ 32.559454] usb 2-1: device descriptor read/64, error -110
[ 47.929777] usb 2-1: device descriptor read/64, error -110
[ 48.049658] usb usb2-port1: attempt power cycle
[ 48.759475] usb 2-1: new full-speed USB device number 4 using tegra-ehci
[ 59.349457] usb 2-1: device not accepting address 4, error -110
[ 59.509449] usb 2-1: new full-speed USB device number 5 using tegra-ehci
[ 70.069457] usb 2-1: device not accepting address 5, error -110
[ 70.079721] usb usb2-port1: unable to enumerate USB device
Fix this by actually allowing the rate to also be set from within the
Linux kernel.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
Currently the VDE clock rate is determined by the clock configuration left
by the bootloader; let's not rely on that and instead explicitly specify the
clock rate in the CCF driver.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
PLL_C_OUT_1 can't produce the 216 MHz rate defined in the init_table. Let's
set it to 240 MHz and explicitly specify the HCLK rate for consistency.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
The machine dies if HCLK, SCLK or EMC is disabled. Hence mark these clocks
as critical.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Cc: <stable@vger.kernel.org> # v4.16
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
Tegra210 has a hw bug which can cause IP blocks to lock up when ungating a
domain. The reason is that the logic responsible for resetting the memory
built-in self test mode can come up in an undefined state because its
clock is gated by a second level clock gate (SLCG). Work around this by
making sure the logic will get some clock edges by ensuring the relevant
clock is enabled and temporarily overriding the relevant SLCGs.
Unfortunately for some IP blocks, the control bits for overriding the
SLCGs are not in CAR, but in the IP block itself. This means we need to
map a few extra register banks in the clock code.
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Hector Martin <marcan@marcan.st>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
To ensure writes to clock registers have properly propagated through the
clock control logic and state machines, we need to make sure the writes have
been posted to the registers and then wait for 1 us.
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Hector Martin <marcan@marcan.st>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
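A sketch of what such a fence-and-delay helper can look like (the dummy
read-back here is arbitrary; the actual macro may read back a specific
register in the bank):

    #include <linux/delay.h>
    #include <linux/io.h>

    /*
     * Read back from the register bank to force the preceding writes to be
     * posted, then give the clock logic time to settle.
     */
    #define fence_udelay(delay, reg)        \
            do {                            \
                    readl(reg);             \
                    udelay(delay);          \
            } while (0)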
|
This clock is needed by the memory built-in self-test workaround.
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Hector Martin <marcan@marcan.st>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk updates from Stephen Boyd:
"We have two changes to the core framework this time around.
The first being a large change that introduces runtime PM support to
the clk framework. Now we properly call runtime PM operations on the
device providing a clk when the clk is in use. This helps on SoCs
where the clks provided by a device need something to be powered on
before using the clks, like power domains or regulators. It also helps
power those things down when clks aren't in use.
The other core change is a devm API addition for clk providers so we
can get rid of a bunch of clk driver remove functions that are just
doing of_clk_del_provider().
Outside of the core, we have the usual addition of clk drivers and
smattering of non-critical fixes to existing drivers. The biggest diff
is support for Mediatek MT2712 and MT7622 SoCs, but those patches
really just add a bunch of data.
By the way, we're trying something new here where we build the tree up
with topic branches. We plan to work this into our workflow so that we
don't step on each other's toes, and so the fixes branch can be merged
on an as-needed basis.
Summary:
Core:
- runtime PM support for clk providers
- devm API for of_clk_add_hw_provider()
New Drivers:
- Mediatek MT2712 and MT7622
- Renesas R-Car V3M SoC
Updates:
- runtime PM support for Samsung exynos5433/exynos4412 providers
- removal of clkdev aliases on Samsung SoCs
- convert clk-gpio to use gpio descriptors
- various driver cleanups to match kernel coding style
- Amlogic Video Processing Unit VPU and VAPB clks
- sigma-delta modulation for Allwinner audio PLLs
- Allwinner A83t Display clks
- support for the second display unit clock on Renesas RZ/G1E
- suspend/resume support for Renesas R-Car Gen3 CPG/MSSR
- new clock ids for Rockchip rk3188 and rk3368 SoCs
- various 'const' markings on clk_ops structures
- RPM clk support on Qualcomm MSM8996/MSM8660 SoCs"
* tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: (137 commits)
clk: stm32h7: fix test of clock config
clk: pxa: fix building on older compilers
clk: sunxi-ng: a83t: Fix i2c buses bits
clk: ti: dra7-atl-clock: fix child-node lookups
clk: qcom: common: fix legacy board-clock registration
clk: uniphier: fix DAPLL2 clock rate of Pro5
clk: uniphier: fix parent of miodmac clock data
clk: hi3798cv200: correct parent mux clock for 'clk_sdio0_ciu'
clk: hisilicon: Delete an error message for a failed memory allocation in hisi_register_clkgate_sep()
clk: hi3660: fix incorrect uart3 clock freqency
clk: kona-setup: Delete error messages for failed memory allocations
ARC: clk: fix spelling mistake: "configurarion" -> "configuration"
clk: cdce925: remove redundant check for non-null parent_name
clk: versatile: Improve sizeof() usage
clk: versatile: Delete error messages for failed memory allocations
clk: ux500: Improve sizeof() usage
clk: ux500: Delete error messages for failed memory allocations
clk: spear: Delete error messages for failed memory allocations
clk: ti: Delete error messages for failed memory allocations
clk: mmp: Adjust checks for NULL pointers
...
|
Below is the call trace of tegra210_init_pllu() function:
start_kernel()
-> time_init()
--> of_clk_init()
---> tegra210_clock_init()
----> tegra210_pll_init()
-----> tegra210_init_pllu()
Because preemption is disabled in start_kernel() before calling
time_init(), tegra210_init_pllu() actually runs in an atomic context,
while it includes a readl_relaxed_poll_timeout() that might sleep.
So this patch just changes this readl_relaxed_poll_timeout() to its
atomic version.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
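Both poll helpers come from <linux/iopoll.h>; the _atomic variant busy-waits
with udelay() instead of sleeping, so it is safe in this early init path. A
sketch of the substitution (the register name, lock bit and polling intervals
below are illustrative):

    #include <linux/iopoll.h>

    /* sleeping variant: not allowed while preemption is disabled */
    err = readl_relaxed_poll_timeout(clk_base + PLLU_BASE, reg,
                                     reg & PLL_BASE_LOCK, 100, 1000);

    /* atomic variant: busy-waits instead of sleeping */
    err = readl_relaxed_poll_timeout_atomic(clk_base + PLLU_BASE, reg,
                                            reg & PLL_BASE_LOCK, 1, 1000);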
|
Both tegra124-dfll and clk-dfll use platform_set_drvdata() to set the
drvdata of the exact same pdev, while they use different pointers for
the drvdata. Once the drvdata has been overwritten by tegra124-dfll,
clk-dfll will never get the td pointer it expects.
Since tegra124-dfll merely needs its soc pointer in its remove
function, this patch fixes the bug by removing the overwriting in the
tegra124-dfll file and letting tegra_dfll_unregister() return the soc
pointer for it.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
According to comments in code and common sense, cclk_lp uses its
own divisor, not cclk_g's.
Fixes: b08e8c0ecc42 ("clk: tegra: add clock support for Tegra30")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
AHB DMA runs at 1/2 of the SCLK rate, APB DMA at 1/4. Increasing the SCLK
rate results in an increased DMA transfer rate.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
The APBDMA clock is defined in the common clock gates table that is used
by Tegra30+. Tegra20 can use it too; let's remove the custom definition
and use the common one.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
APBDMA represents a clock gate to the APB DMA controller, the actual
clock source for the controller is PCLK.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
The AHB DMA engine is present on Tegra20/30. Add the missing clock entries so
that a driver for the AHB DMA controller can be implemented.
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
Currently, the APB clock is registered with the CLK_IGNORE_UNUSED flag
to prevent the clock from being disabled if unused on boot. However,
even if it is used, it still needs to be always kept enabled so that it
doesn't get inadvertently disabled when all of its children are, and so
update the flag for the APB clock to be CLK_IS_CRITICAL.
Suggested-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
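For illustration, the difference amounts to which flag the clock is
registered with; a sketch using the generic gate helper (the register offset,
bit and lock names are placeholders, and the Tegra driver uses its own
registration paths):

    /* CLK_IGNORE_UNUSED: skip the boot-time disabling of unused clocks,
     * but the clock can still be gated once its last user disables it */
    hw = clk_hw_register_gate(NULL, "apb", "hclk", CLK_IGNORE_UNUSED,
                              clk_base + APB_GATE_REG, 0, 0, &apb_lock);

    /* CLK_IS_CRITICAL: the framework holds an enable reference forever,
     * so the clock is never gated even when all children are disabled */
    hw = clk_hw_register_gate(NULL, "apb", "hclk", CLK_IS_CRITICAL,
                              clk_base + APB_GATE_REG, 0, 0, &apb_lock);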
|
These structures are only passed to the functions tegra_clk_register_pll,
tegra_clk_register_pll{e/u} or tegra_periph_clk_init during the init
phase. These functions modify the structures only during the init phase
and after that the structures are never modified. Therefore, make them
__ro_after_init.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
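The annotation itself is a one-word addition to each definition; for example
(the parameter values shown are placeholders, not the driver's):

    #include <linux/cache.h>        /* __ro_after_init */

    static struct tegra_clk_pll_params pll_c_params __ro_after_init = {
            .vco_min = 600000000,
            .vco_max = 1400000000,
            /* ... remaining parameters unchanged ... */
    };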
|
This clock was previously called sor1_src and was modelled as an input
to the sor1 module clock. However, it's really an output clock that can
be fed either from the safe, the sor1_pad_clkout or the sor1 module
clocks. sor1 itself can take input from either of the display PLLs.
The same implementation for the sor1_out clock is used on Tegra186, so
this nicely lines up both SoC generations to deal with this clock in a
uniform way.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
Instead of open-coding the same pattern repeatedly, reuse the newly
introduced tegra_clk_register_periph_data() helper that will unpack
the initialization structure.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
There is a common pattern that registers individual peripheral clocks
from an initialization table. Add a common implementation to remove the
duplication from various call sites.
Signed-off-by: Thierry Reding <treding@nvidia.com>
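A condensed sketch of what such a helper looks like; the init-data members
shown (name, p.parent_names, num_parents, periph, offset, flags) are assumed
from the description rather than quoted from the driver:

    struct clk *tegra_clk_register_periph_data(void __iomem *clk_base,
                                               struct tegra_periph_init_data *init)
    {
            /* unpack the table entry so callers no longer open-code this */
            return tegra_clk_register_periph(init->name, init->p.parent_names,
                                             init->num_parents, &init->periph,
                                             clk_base, init->offset,
                                             init->flags);
    }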
|
Check the return code in BPMP response message(s). The typical error case
is when a clock operation is attempted with an invalid clock identifier.
Also remove the error print from the call to clk_get_info(), as the
implementation loops through the range of all possible identifiers, yet
the operation is expected to error out when the clock ID is unused.
Signed-off-by: Timo Alho <talho@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
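A sketch of the kind of check being added, using the tegra-bpmp mailbox API;
the translation of the firmware error code into -EINVAL is a simplification:

    err = tegra_bpmp_transfer(bpmp, &msg);
    if (err < 0)
            return err;             /* transport-level failure */

    /* the firmware reports per-request errors in the response itself,
     * e.g. when the requested clock identifier does not exist */
    if (msg.rx.ret < 0)
            return -EINVAL;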
|
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to
a file was done in a spreadsheet of side by side results from of the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
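For a C source file the added identifier is a single comment as the first
line of the file, for example:

    // SPDX-License-Identifier: GPL-2.0

while headers use the block-comment form:

    /* SPDX-License-Identifier: GPL-2.0 */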
|
- Added necessary delays in PLLU enable sequence during initialization
- Applied PLLU lock to all secondary gates (PLLU_48M and PLLU_60M were
missing).
Signed-off-by: Alex Frid <afrid@nvidia.com>
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
Increased the Tegra210 UTMIPLL power-on delay to 20 us (the spec maximum is
15 us). Also removed a few empty lines to make the ACTIVE_DLY_COUNT and
ENABLE_DLY_COUNT fields clearer.
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
Switched Tegra210 PLLRE registration to the common PLL ops instead of the
special PLLRE ops used on previous Tegra chips. The latter ops do not follow
the chip-specific PLL frequency table, and do not apply the chip-specific
rate calculation method.
Removed an unnecessary default rate setting that duplicates the h/w reset
state and is overwritten by clock initialization anyway.
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Removed from the Tegra210 PLLSS registration code the sections that
- attempt to set the PLL minimum rate (unnecessary, and dangerous if the PLL
is already enabled on boot)
- apply pre-Tegra210 default settings
- check the IDDQ setting (duplicated by the Tegra210 PLLSS defaults check)
Replaced the setting of the reference clock with a check that the default
oscillator selection is not changed, and failed registration otherwise, as
validation was only done with the oscillator as the reference clock.
Reordered the registration so that PLL initialization is called after the
VCOmin adjustment.
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|