+What: /sys/bus/pci/drivers/.../bind
+Date: December 2003
+Contact: linux-pci@vger.kernel.org
+Description:
+ Writing a device location to this file will cause
+ the driver to attempt to bind to the device found at
+ this location. This is useful for overriding default
+ bindings. The format for the location is: DDDD:BB:DD.F.
+ That is Domain:Bus:Device.Function and is the same as
+ found in /sys/bus/pci/devices/. For example:
+ # echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/bind
+ (Note: kernels before 2.6.28 may require echo -n).
+
+What: /sys/bus/pci/drivers/.../unbind
+Date: December 2003
+Contact: linux-pci@vger.kernel.org
+Description:
+ Writing a device location to this file will cause the
+ driver to attempt to unbind from the device found at
+ this location. This may be useful when overriding default
+ bindings. The format for the location is: DDDD:BB:DD.F.
+ That is Domain:Bus:Device.Function and is the same as
+ found in /sys/bus/pci/devices/. For example:
+ # echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/unbind
+ (Note: kernels before 2.6.28 may require echo -n).
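+ A typical override is to unbind a device from its current
+ driver and then bind it to another one (a sketch; the
+ drivers "foo" and "bar" and the device address are
+ placeholders):
+ # echo 0000:00:19.0 > /sys/bus/pci/drivers/foo/unbind
+ # echo 0000:00:19.0 > /sys/bus/pci/drivers/bar/bind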
+
+What: /sys/bus/pci/drivers/.../new_id
+Date: December 2003
+Contact: linux-pci@vger.kernel.org
+Description:
+ Writing a device ID to this file will attempt to
+ dynamically add a new device ID to a PCI device driver.
+ This may allow the driver to support more hardware than
+ was included in the driver's static device ID support
+ table at compile time. The format for the device ID is:
+ VVVV DDDD SVVV SDDD CCCC MMMM PPPP. That is Vendor ID,
+ Device ID, Subsystem Vendor ID, Subsystem Device ID,
+ Class, Class Mask, and Private Driver Data. The Vendor ID
+ and Device ID fields are required, the rest are optional.
+ Upon successfully adding an ID, the driver will probe
+ for the device and attempt to bind to it. For example:
+ # echo "8086 10f5" > /sys/bus/pci/drivers/foo/new_id
+
What: /sys/bus/pci/devices/.../vpd
Date: February 2008
Contact: Ben Hutchings <bhutchings@solarflare.com>
+++ /dev/null
-This README escorted the skystar2-driver rewriting procedure. It describes the
-state of the new flexcop-driver set and some internals are written down here
-too.
-
-This document hopefully describes things about the flexcop and its
-device-offsprings. Goal was to write an easy-to-write and easy-to-read set of
-drivers based on the skystar2.c and other information.
-
-Remark: flexcop-pci.c was a copy of skystar2.c, but every line has been
-touched and rewritten.
-
-History & News
-==============
- 2005-04-01 - correct USB ISOC transfers (thanks to Vadim Catana)
-
-
-
-
-General coding processing
-=========================
-
-We should proceed as follows (as long as no one complains):
-
-0) Think before start writing code!
-
-1) rewriting the skystar2.c with the help of the flexcop register descriptions
-and splitting up the files to a pci-bus-part and a flexcop-part.
-The new driver will be called b2c2-flexcop-pci.ko/b2c2-flexcop-usb.ko for the
-device-specific part and b2c2-flexcop.ko for the common flexcop-functions.
-
-2) Search for errors in the leftover of flexcop-pci.c (compare with pluto2.c
-and other pci drivers)
-
-3) make some beautification (see 'Improvements when rewriting (refactoring) is
-done')
-
-4) Testing the new driver and maybe substitute the skystar2.c with it, to reach
-a wider tester audience.
-
-5) creating an usb-bus-part using the already written flexcop code for the pci
-card.
-
-Idea: create a kernel-object for the flexcop and export all important
-functions. This option saves kernel-memory, but maybe a lot of functions have
-to be exported to kernel namespace.
-
-
-Current situation
-=================
-
-0) Done :)
-1) Done (some minor issues left)
-2) Done
-3) Not ready yet, more information is necessary
-4) next to be done (see the table below)
-5) USB driver is working (yes, there are some minor issues)
-
-What seems to be ready?
------------------------
-
-1) Rewriting
-1a) i2c is cut off from the flexcop-pci.c and seems to work
-1b) moved tuner and demod stuff from flexcop-pci.c to flexcop-tuner-fe.c
-1c) moved lnb and diseqc stuff from flexcop-pci.c to flexcop-tuner-fe.c
-1e) eeprom (reading MAC address)
-1d) sram (no dynamic sll size detection (commented out) (using default as JJ told me))
-1f) misc. register accesses for reading parameters (e.g. resetting, revision)
-1g) pid/mac filter (flexcop-hw-filter.c)
-1i) dvb-stuff initialization in flexcop.c (done)
-1h) dma stuff (now just using the size-irq, instead of all-together, to be done)
-1j) remove flexcop initialization from flexcop-pci.c completely (done)
-1l) use a well working dma IRQ method (done, see 'Known bugs and problems and TODO')
-1k) cleanup flexcop-files (remove unused EXPORT_SYMBOLs, make static from
-non-static where possible, moved code to proper places)
-
-2) Search for errors in the leftover of flexcop-pci.c (partially done)
-5a) add MAC address reading
-5c) feeding of ISOC data to the software demux (format of the isochronous data
-and speed optimization, no real error) (thanks to Vadim Catana)
-
-What to do in the near future?
---------------------------------------
-(no special order here)
-
-5) USB driver
-5b) optimize isoc-transfer (submitting/killing isoc URBs when transfer is starting)
-
-Testing changes
----------------
-
-O = item is working
-P = item is partially working
-X = item is not working
-N = item does not apply here
-<empty field> = item need to be examined
-
- | PCI | USB
-item | mt352 | nxt2002 | stv0299 | mt312 | mt352 | nxt2002 | stv0299 | mt312
--------+-------+---------+---------+-------+-------+---------+---------+-------
-1a) | O | | | | N | N | N | N
-1b) | O | | | | | | O |
-1c) | N | N | | | N | N | O |
-1d) | O | O
-1e) | O | O
-1f) | P
-1g) | O
-1h) | P |
-1i) | O | N
-1j) | O | N
-1l) | O | N
-2) | O | N
-5a) | N | O
-5b)* | N |
-5c) | N | O
-
-* - not done yet
-
-Known bugs and problems and TODO
---------------------------------
-
-1g/h/l) when pid filtering is enabled on the pci card
-
-DMA usage currently:
- The DMA is splitted in 2 equal-sized subbuffers. The Flexcop writes to first
- address and triggers an IRQ when it's full and starts writing to the second
- address. When the second address is full, the IRQ is triggered again, and
- the flexcop writes to first address again, and so on.
- The buffersize of each address is currently 640*188 bytes.
-
- Problem is, when using hw-pid-filtering and doing some low-bandwidth
- operation (like scanning) the buffers won't be filled enough to trigger
- the IRQ. That's why:
-
- When PID filtering is activated, the timer IRQ is used. Every 1.97 ms the IRQ
- is triggered. Is the current write address of DMA1 different to the one
- during the last IRQ, then the data is passed to the demuxer.
-
- There is an additional DMA-IRQ-method: packet count IRQ. This isn't
- implemented correctly yet.
-
- The solution is to disable HW PID filtering, but I don't know how the DVB
- API software demux behaves on slow systems with 45MBit/s TS.
-
-Solved bugs :)
---------------
-1g) pid-filtering (somehow pid index 4 and 5 (EMM_PID and ECM_PID) aren't
-working)
-SOLUTION: also index 0 was affected, because net_translation is done for
-these indexes by default
-
-5b) isochronous transfer does only work in the first attempt (for the Sky2PC
-USB, Air2PC is working) SOLUTION: the flexcop was going asleep and never really
-woke up again (don't know if this need fixes, see
-flexcop-fe-tuner.c:flexcop_sleep)
-
-NEWS: when the driver is loaded and unloaded and loaded again (w/o doing
-anything in the while the driver is loaded the first time), no transfers take
-place anymore.
-
-Improvements when rewriting (refactoring) is done
-=================================================
-
-- split sleeping of the flexcop (misc_204.ACPI3_sig = 1;) from lnb_control
- (enable sleeping for other demods than dvb-s)
-- add support for CableStar (stv0297 Microtune 203x/ALPS) (almost done, incompatibilities with the Nexus-CA)
-
-Debugging
----------
-- add verbose debugging to skystar2.c (dump the reg_dw_data) and compare it
- with this flexcop, this is important, because i2c is now using the
- flexcop_ibi_value union from flexcop-reg.h (do you have a better idea for
- that, please tell us so).
-
-Everything which is identical in the following table, can be put into a common
-flexcop-module.
-
- PCI USB
--------------------------------------------------------------------------------
-Different:
-Register access: accessing IO memory USB control message
-I2C bus: I2C bus of the FC USB control message
-Data transfer: DMA isochronous transfer
-EEPROM transfer: through i2c bus not clear yet
-
-Identical:
-Streaming: accessing registers
-PID Filtering: accessing registers
-Sram destinations: accessing registers
-Tuner/Demod: I2C bus
-DVB-stuff: can be written for common use
-
-Acknowledgements (just for the rewriting part)
-================
-
-Bjarne Steinsbo thought a lot in the first place of the pci part for this code
-sharing idea.
-
-Andreas Oberritter for providing a recent PCI initialization template
-(pluto2.c).
-
-Boleslaw Ciesielski for pointing out a problem with firmware loader.
-
-Vadim Catana for correcting the USB transfer.
-
-comments, critics and ideas to linux-dvb@linuxtv.org.
-How to set up the Technisat devices
-===================================
+How to set up the Technisat/B2C2 Flexcop devices
+================================================
1) Find out what device you have
================================
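+A quick way to identify the card before configuring anything is to list the
+PCI/USB IDs (a sketch; the grep pattern assumes the vendor string in
+pci.ids/usb.ids contains "B2C2"):
+# lspci -nn | grep -i b2c2
+# lsusb | grep -i b2c2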
If the Technisat is the only TV device in your box, get rid of unnecessary modules and check this one:
"Multimedia devices" => "Customise analog and hybrid tuner modules to build"
-In this directory uncheck every driver which is activated there.
+In this directory, uncheck every driver that is activated there (except "Simple tuner support", which is needed for case 9 only).
Then please activate:
2a) Main module part:
a.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters"
-b.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC PCI" in case of a PCI card OR
+b.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC PCI" in case of a PCI card
+OR
c.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Technisat/B2C2 Air/Sky/Cable2PC USB" in case of an USB 1.1 adapter
d.)"Multimedia devices" => "DVB/ATSC adapters" => "Technisat/B2C2 FlexcopII(b) and FlexCopIII adapters" => "Enable debug for the B2C2 FlexCop drivers"
Notice: d.) is helpful for troubleshooting
2b) Frontend module part:
-1.) Revision 2.3:
+1.) SkyStar DVB-S Revision 2.3:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink VP310/MT312/ZL10313 based"
-2.) Revision 2.6:
+2.) SkyStar DVB-S Revision 2.6:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0299 based"
-3.) Revision 2.7:
+3.) SkyStar DVB-S Revision 2.7:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Samsung S5H1420 based"
c.)"Multimedia devices" => "Customise DVB frontends" => "Integrant ITD1000 Zero IF tuner for DVB-S/DSS"
d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller"
-4.) Revision 2.8:
+4.) SkyStar DVB-S Revision 2.8:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24113/CX24128 tuner for DVB-S/DSS"
c.)"Multimedia devices" => "Customise DVB frontends" => "Conexant CX24123 based"
d.)"Multimedia devices" => "Customise DVB frontends" => "ISL6421 SEC controller"
-5.) DVB-T card:
+5.) AirStar DVB-T card:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Zarlink MT352 based"
-6.) DVB-C card:
+6.) CableStar DVB-C card:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "ST STV0297 based"
-7.) ATSC card 1st generation:
+7.) AirStar ATSC card 1st generation:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "Broadcom BCM3510"
-8.) ATSC card 2nd generation:
+8.) AirStar ATSC card 2nd generation:
a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
b.)"Multimedia devices" => "Customise DVB frontends" => "NxtWave Communications NXT2002/NXT2004 based"
-c.)"Multimedia devices" => "Customise DVB frontends" => "LG Electronics LGDT3302/LGDT3303 based"
+c.)"Multimedia devices" => "Customise DVB frontends" => "Generic I2C PLL based tuners"
-Author: Uwe Bugla <uwe.bugla@gmx.de> December 2008
+9.) AirStar ATSC card 3rd generation:
+a.)"Multimedia devices" => "Customise DVB frontends" => "Customise the frontend modules to build"
+b.)"Multimedia devices" => "Customise DVB frontends" => "LG Electronics LGDT3302/LGDT3303 based"
+c.)"Multimedia devices" => "Customise analog and hybrid tuner modules to build" => "Simple tuner support"
+
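+After building and installing the modules you can check that the driver and
+its frontend attach (a sketch; the module name corresponds to 2a above for
+the PCI card, and the dmesg pattern is only a guess):
+# modprobe b2c2-flexcop-pci
+# dmesg | grep -i flexcop
+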
+Author: Uwe Bugla <uwe.bugla@gmx.de> February 2009
Parameters denoted with BOOT are actually interpreted by the boot
loader, and have no meaning to the kernel directly.
Do not modify the syntax of boot loader parameters without extreme
-need or coordination with <Documentation/x86/i386/boot.txt>.
+need or coordination with <Documentation/x86/boot.txt>.
There are also arch-specific kernel-parameters not documented here.
See for example <Documentation/x86/x86_64/boot-options.txt>.
icn= [HW,ISDN]
Format: <io>[,<membase>[,<icn_id>[,<icn_id2>]]]
- ide= [HW] (E)IDE subsystem
- Format: ide=nodma or ide=doubler
+ ide-core.nodma= [HW] (E)IDE subsystem
+ Format: =0.0 to prevent DMA on hda, =0.1 for hdb,
+ =1.0 for hdc, and so on.
+ .vlb_clock .pci_clock .noflush .noprobe .nowerr .cdrom
+ .chs .ignore_cable are additional options
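+ For example, to disable DMA on the slave device of
+ the first interface (hdb), a sketch:
+ ide-core.nodma=0.1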
See Documentation/ide/ide.txt.
idebus= [HW] (E)IDE subsystem - VLB/PCI bus speed
See Documentation/fb/modedb.txt.
vga= [BOOT,X86-32] Select a particular video mode
- See Documentation/x86/i386/boot.txt and
+ See Documentation/x86/boot.txt and
Documentation/svga.txt.
Use vga=ask for menu.
This is actually a boot loader parameter; the value is
============
The Chelsio T3 ASIC based Adapters (S310, S320, S302, S304, Mezz cards, etc.
-series of products) supports iSCSI acceleration and iSCSI Direct Data Placement
+series of products) support iSCSI acceleration and iSCSI Direct Data Placement
(DDP) where the hardware handles the expensive byte touching operations, such
as CRC computation and verification, and direct DMA to the final host memory
destination:
the TCP segments onto the wire. It handles TCP retransmission if
needed.
- On receving, S3 h/w recovers the iSCSI PDU by reassembling TCP
+ On receiving, S3 h/w recovers the iSCSI PDU by reassembling TCP
segments, separating the header and data, calculating and verifying
- the digests, then forwards the header to the host. The payload data,
+ the digests, then forwarding the header to the host. The payload data,
if possible, will be directly placed into the pre-posted host DDP
buffer. Otherwise, the payload data will be sent to the host too.
sure the ip address is unique in the network.
3. edit /etc/iscsi/iscsid.conf
- The default setting for MaxRecvDataSegmentLength (131072) is too big,
- replace "node.conn[0].iscsi.MaxRecvDataSegmentLength" to be a value no
- bigger than 15360 (for example 8192):
+ The default setting for MaxRecvDataSegmentLength (131072) is too big;
+ replace with a value no bigger than 15360 (for example 8192):
node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192
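+ A sketch of making this edit non-interactively (the path is the one
+ from step 3):
+ # sed -i 's/^\(node.conn\[0\].iscsi.MaxRecvDataSegmentLength\) = .*/\1 = 8192/' /etc/iscsi/iscsid.conf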
ISDN SUBSYSTEM
P: Karsten Keil
-M: kkeil@suse.de
+M: isdn@linux-pingi.de
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
W: http://www.isdn4linux.de
T: git kernel.org:/pub/scm/linux/kernel/kkeil/isdn-2.6.git
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 29
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
NAME = Erotic Pickled Herring
# *DOCUMENTATION*
unsigned int cachetype = read_cpuid_cachetype();
unsigned int arch = cpu_architecture();
- if (arch >= CPU_ARCH_ARMv7) {
- cacheid = CACHEID_VIPT_NONALIASING;
- if ((cachetype & (3 << 14)) == 1 << 14)
- cacheid |= CACHEID_ASID_TAGGED;
- } else if (arch >= CPU_ARCH_ARMv6) {
- if (cachetype & (1 << 23))
+ if (arch >= CPU_ARCH_ARMv6) {
+ if ((cachetype & (7 << 29)) == 4 << 29) {
+ /* ARMv7 register format */
+ cacheid = CACHEID_VIPT_NONALIASING;
+ if ((cachetype & (3 << 14)) == 1 << 14)
+ cacheid |= CACHEID_ASID_TAGGED;
+ } else if (cachetype & (1 << 23))
cacheid = CACHEID_VIPT_ALIASING;
else
cacheid = CACHEID_VIPT_NONALIASING;
at91_sys_read(AT91_AIC_IPR) & at91_sys_read(AT91_AIC_IMR));
error:
- sdram_selfrefresh_disable();
target_state = PM_SUSPEND_ON;
at91_irq_resume();
at91_gpio_resume();
gpio_request(gpio + 7, "nCF_SEL");
gpio_direction_output(gpio + 7, 1);
+ /* irlml6401 sustains over 3A, switches 5V in under 8 msec */
+ setup_usb(500, 8);
+
return 0;
}
platform_add_devices(davinci_evm_devices,
ARRAY_SIZE(davinci_evm_devices));
evm_init_i2c();
-
- /* irlml6401 sustains over 3A, switches 5V in under 8 msec */
- setup_usb(500, 8);
}
static __init void davinci_evm_irq_init(void)
.rate = &commonrate,
.lpsc = DAVINCI_LPSC_GPIO,
},
+ {
+ .name = "usb",
+ .rate = &commonrate,
+ .lpsc = DAVINCI_LPSC_USB,
+ },
{
.name = "AEMIFCLK",
.rate = &commonrate,
#elif defined(CONFIG_USB_MUSB_HOST)
.mode = MUSB_HOST,
#endif
+ .clock = "usb",
.config = &musb_config,
};
#ifdef CONFIG_CPU_32v6K
clrex
#else
- strex r0, r1, [sp] @ Clear the exclusive monitor
+ sub r1, sp, #4 @ Get unused stack location
+ strex r0, r1, [r1] @ Clear the exclusive monitor
#endif
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR
u32 mask;
mask = __raw_readl(S3C64XX_EINT0MASK);
- mask |= eint_irq_to_bit(irq);
+ mask &= ~eint_irq_to_bit(irq);
__raw_writel(mask, S3C64XX_EINT0MASK);
}
and include PCI device scope covered by these DMA
remapping devices.
+config DMAR_DEFAULT_ON
+ def_bool y
+ prompt "Enable DMA Remapping Devices by default"
+ depends on DMAR
+ help
+ Selecting this option will enable a DMAR device at boot time if
+ one is found. If this option is not selected, DMAR support can
+ be enabled by passing intel_iommu=on to the kernel. It is
+ recommended you say N here while the DMAR code remains
+ experimental.
+
endmenu
endif
if (trigger == IOSAPIC_EDGE)
return -EINVAL;
- for (i = 0; i <= NR_IRQS; i++) {
+ for (i = 0; i < NR_IRQS; i++) {
info = &iosapic_intr_info[i];
if (info->trigger == trigger && info->polarity == pol &&
(info->dmode == IOSAPIC_FIXED ||
/* next, remove hash table entries for this table */
- for (index = 0; index <= UNW_HASH_SIZE; ++index) {
+ for (index = 0; index < UNW_HASH_SIZE; ++index) {
tmp = unw.cache + unw.hash[index];
if (unw.hash[index] >= UNW_CACHE_SIZE
|| tmp->ip < table->start || tmp->ip >= table->end)
select SYS_SUPPORTS_64BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_HIGHMEM
- select CPU_CAVIUM_OCTEON
+ select SYS_HAS_CPU_CAVIUM_OCTEON
help
The Octeon simulator is a software performance model of the Cavium
Octeon Processor. It supports simulating Octeon processors on x86
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_HIGHMEM
select SYS_HAS_EARLY_PRINTK
- select CPU_CAVIUM_OCTEON
+ select SYS_HAS_CPU_CAVIUM_OCTEON
select SWAP_IO_SPACE
help
This option supports all of the Octeon reference boards from Cavium
config CPU_CAVIUM_OCTEON
bool "Cavium Octeon processor"
+ depends on SYS_HAS_CPU_CAVIUM_OCTEON
select IRQ_CPU
select IRQ_CPU_OCTEON
select CPU_HAS_PREFETCH
config SYS_HAS_CPU_SB1
bool
+config SYS_HAS_CPU_CAVIUM_OCTEON
+ bool
+
#
# CPU may reorder R->R, R->W, W->R, W->W
# Reordering beyond LL and SC is handled in WEAK_REORDERING_BEYOND_LLSC
config 64BIT
bool "64-bit kernel"
depends on CPU_SUPPORTS_64BIT_KERNEL && SYS_SUPPORTS_64BIT_KERNEL
+ select HAVE_SYSCALL_WRAPPERS
help
Select this option if you want to build a 64-bit kernel.
* setup counter 1 (RTC) to tick at full speed
*/
t = 0xffffff;
- while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_T1S) && t--)
+ while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_T1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;
au_sync();
t = 0xffffff;
- while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && t--)
+ while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;
au_sync();
t = 0xffffff;
- while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && t--)
+ while ((au_readl(SYS_COUNTER_CNTRL) & SYS_CNTRL_C1S) && --t)
asm volatile ("nop");
if (!t)
goto cntr_err;
#ifndef __ASM_SECCOMP_H
-#include <linux/thread_info.h>
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
seq_printf(p, "%10u ", kstat_cpu(j).irqs[i]);
#endif
seq_printf(p, " %14s", irq_desc[i].chip->name);
- seq_printf(p, "-%-8s", irq_desc[i].name);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
#include <linux/module.h>
#include <linux/binfmts.h>
#include <linux/security.h>
+#include <linux/syscalls.h>
#include <linux/compat.h>
#include <linux/vfs.h>
#include <linux/ipc.h>
#define merge_64(r1, r2) ((((r2) & 0xffffffffUL) << 32) + ((r1) & 0xffffffffUL))
#endif
-asmlinkage unsigned long
-sys32_mmap2(unsigned long addr, unsigned long len, unsigned long prot,
- unsigned long flags, unsigned long fd, unsigned long pgoff)
+SYSCALL_DEFINE6(32_mmap2, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags, unsigned long, fd,
+ unsigned long, pgoff)
{
struct file * file = NULL;
unsigned long error;
int rlim_max;
};
-asmlinkage long sys32_truncate64(const char __user * path,
- unsigned long __dummy, int a2, int a3)
+SYSCALL_DEFINE4(32_truncate64, const char __user *, path,
+ unsigned long, __dummy, unsigned long, a2, unsigned long, a3)
{
return sys_truncate(path, merge_64(a2, a3));
}
-asmlinkage long sys32_ftruncate64(unsigned int fd, unsigned long __dummy,
- int a2, int a3)
+SYSCALL_DEFINE4(32_ftruncate64, unsigned long, fd, unsigned long, __dummy,
+ unsigned long, a2, unsigned long, a3)
{
return sys_ftruncate(fd, merge_64(a2, a3));
}
-asmlinkage int sys32_llseek(unsigned int fd, unsigned int offset_high,
- unsigned int offset_low, loff_t __user * result,
- unsigned int origin)
+SYSCALL_DEFINE5(32_llseek, unsigned long, fd, unsigned long, offset_high,
+ unsigned long, offset_low, loff_t __user *, result,
+ unsigned long, origin)
{
return sys_llseek(fd, offset_high, offset_low, result, origin);
}
lseek back to original location. They fail just like lseek does on
non-seekable files. */
-asmlinkage ssize_t sys32_pread(unsigned int fd, char __user * buf,
- size_t count, u32 unused, u64 a4, u64 a5)
+SYSCALL_DEFINE6(32_pread, unsigned long, fd, char __user *, buf, size_t, count,
+ unsigned long, unused, unsigned long, a4, unsigned long, a5)
{
return sys_pread64(fd, buf, count, merge_64(a4, a5));
}
-asmlinkage ssize_t sys32_pwrite(unsigned int fd, const char __user * buf,
- size_t count, u32 unused, u64 a4, u64 a5)
+SYSCALL_DEFINE6(32_pwrite, unsigned int, fd, const char __user *, buf,
+ size_t, count, u32, unused, u64, a4, u64, a5)
{
return sys_pwrite64(fd, buf, count, merge_64(a4, a5));
}
-asmlinkage int sys32_sched_rr_get_interval(compat_pid_t pid,
- struct compat_timespec __user *interval)
+SYSCALL_DEFINE2(32_sched_rr_get_interval, compat_pid_t, pid,
+ struct compat_timespec __user *, interval)
{
struct timespec t;
int ret;
#ifdef CONFIG_SYSVIPC
-asmlinkage long
-sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+SYSCALL_DEFINE6(32_ipc, u32, call, long, first, long, second, long, third,
+ unsigned long, ptr, unsigned long, fifth)
{
int version, err;
#else
-asmlinkage long
-sys32_ipc(u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+SYSCALL_DEFINE6(32_ipc, u32, call, int, first, int, second, int, third,
+ u32, ptr, u32 fifth)
{
return -ENOSYS;
}
#endif /* CONFIG_SYSVIPC */
#ifdef CONFIG_MIPS32_N32
-asmlinkage long sysn32_semctl(int semid, int semnum, int cmd, u32 arg)
+SYSCALL_DEFINE4(n32_semctl, int, semid, int, semnum, int, cmd, u32, arg)
{
/* compat_sys_semctl expects a pointer to union semun */
u32 __user *uptr = compat_alloc_user_space(sizeof(u32));
return compat_sys_semctl(semid, semnum, cmd, uptr);
}
-asmlinkage long sysn32_msgsnd(int msqid, u32 msgp, unsigned msgsz, int msgflg)
+SYSCALL_DEFINE4(n32_msgsnd, int, msqid, u32, msgp, unsigned int, msgsz,
+ int, msgflg)
{
return compat_sys_msgsnd(msqid, msgsz, msgflg, compat_ptr(msgp));
}
-asmlinkage long sysn32_msgrcv(int msqid, u32 msgp, size_t msgsz, int msgtyp,
- int msgflg)
+SYSCALL_DEFINE5(n32_msgrcv, int, msqid, u32, msgp, size_t, msgsz,
+ int, msgtyp, int, msgflg)
{
return compat_sys_msgrcv(msqid, msgsz, msgtyp, msgflg, IPC_64,
compat_ptr(msgp));
#ifdef CONFIG_SYSCTL_SYSCALL
-asmlinkage long sys32_sysctl(struct sysctl_args32 __user *args)
+SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args)
{
struct sysctl_args32 tmp;
int error;
return error;
}
+#else
+
+SYSCALL_DEFINE1(32_sysctl, struct sysctl_args32 __user *, args)
+{
+ return -ENOSYS;
+}
+
#endif /* CONFIG_SYSCTL_SYSCALL */
-asmlinkage long sys32_newuname(struct new_utsname __user * name)
+SYSCALL_DEFINE1(32_newuname, struct new_utsname __user *, name)
{
int ret = 0;
return ret;
}
-asmlinkage int sys32_personality(unsigned long personality)
+SYSCALL_DEFINE1(32_personality, unsigned long, personality)
{
int ret;
personality &= 0xffffffff;
extern asmlinkage long sys_ustat(dev_t dev, struct ustat __user * ubuf);
-asmlinkage int sys32_ustat(dev_t dev, struct ustat32 __user * ubuf32)
+SYSCALL_DEFINE2(32_ustat, dev_t, dev, struct ustat32 __user *, ubuf32)
{
int err;
struct ustat tmp;
return err;
}
-asmlinkage int sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset,
- s32 count)
+SYSCALL_DEFINE4(32_sendfile, long, out_fd, long, in_fd,
+ compat_off_t __user *, offset, s32, count)
{
mm_segment_t old_fs = get_fs();
int ret;
sys sys_swapon 2
sys sys_reboot 3
sys sys_old_readdir 3
- sys old_mmap 6 /* 4090 */
+ sys sys_mips_mmap 6 /* 4090 */
sys sys_munmap 2
sys sys_truncate 2
sys sys_ftruncate 2
sys sys_sendfile 4
sys sys_ni_syscall 0
sys sys_ni_syscall 0
- sys sys_mmap2 6 /* 4210 */
+ sys sys_mips_mmap2 6 /* 4210 */
sys sys_truncate64 4
sys sys_ftruncate64 4
sys sys_stat64 2
PTR sys_newlstat
PTR sys_poll
PTR sys_lseek
- PTR old_mmap
+ PTR sys_mips_mmap
PTR sys_mprotect /* 5010 */
PTR sys_munmap
PTR sys_brk
PTR sys_newlstat
PTR sys_poll
PTR sys_lseek
- PTR old_mmap
+ PTR sys_mips_mmap
PTR sys_mprotect /* 6010 */
PTR sys_munmap
PTR sys_brk
- PTR sys32_rt_sigaction
- PTR sys32_rt_sigprocmask
+ PTR sys_32_rt_sigaction
+ PTR sys_32_rt_sigprocmask
PTR compat_sys_ioctl /* 6015 */
PTR sys_pread64
PTR sys_pwrite64
PTR compat_sys_setitimer
PTR sys_alarm
PTR sys_getpid
- PTR sys32_sendfile
+ PTR sys_32_sendfile
PTR sys_socket /* 6040 */
PTR sys_connect
PTR sys_accept
PTR sys_exit
PTR compat_sys_wait4
PTR sys_kill /* 6060 */
- PTR sys32_newuname
+ PTR sys_32_newuname
PTR sys_semget
PTR sys_semop
- PTR sysn32_semctl
+ PTR sys_n32_semctl
PTR sys_shmdt /* 6065 */
PTR sys_msgget
- PTR sysn32_msgsnd
- PTR sysn32_msgrcv
+ PTR sys_n32_msgsnd
+ PTR sys_n32_msgrcv
PTR compat_sys_msgctl
PTR compat_sys_fcntl /* 6070 */
PTR sys_flock
PTR sys_getsid
PTR sys_capget
PTR sys_capset
- PTR sys32_rt_sigpending /* 6125 */
+ PTR sys_32_rt_sigpending /* 6125 */
PTR compat_sys_rt_sigtimedwait
- PTR sys32_rt_sigqueueinfo
+ PTR sys_32_rt_sigqueueinfo
PTR sysn32_rt_sigsuspend
PTR sys32_sigaltstack
PTR compat_sys_utime /* 6130 */
PTR sys_mknod
- PTR sys32_personality
- PTR sys32_ustat
+ PTR sys_32_personality
+ PTR sys_32_ustat
PTR compat_sys_statfs
PTR compat_sys_fstatfs /* 6135 */
PTR sys_sysfs
PTR sys_sched_getscheduler
PTR sys_sched_get_priority_max
PTR sys_sched_get_priority_min
- PTR sys32_sched_rr_get_interval /* 6145 */
+ PTR sys_32_sched_rr_get_interval /* 6145 */
PTR sys_mlock
PTR sys_munlock
PTR sys_mlockall
PTR sys_munlockall
PTR sys_vhangup /* 6150 */
PTR sys_pivot_root
- PTR sys32_sysctl
+ PTR sys_32_sysctl
PTR sys_prctl
PTR compat_sys_adjtimex
PTR compat_sys_setrlimit /* 6155 */
PTR sys_olduname
PTR sys_umask /* 4060 */
PTR sys_chroot
- PTR sys32_ustat
+ PTR sys_32_ustat
PTR sys_dup2
PTR sys_getppid
PTR sys_getpgrp /* 4065 */
PTR sys_setsid
- PTR sys32_sigaction
+ PTR sys_32_sigaction
PTR sys_sgetmask
PTR sys_ssetmask
PTR sys_setreuid /* 4070 */
PTR sys_swapon
PTR sys_reboot
PTR compat_sys_old_readdir
- PTR old_mmap /* 4090 */
+ PTR sys_mips_mmap /* 4090 */
PTR sys_munmap
PTR sys_truncate
PTR sys_ftruncate
PTR compat_sys_wait4
PTR sys_swapoff /* 4115 */
PTR compat_sys_sysinfo
- PTR sys32_ipc
+ PTR sys_32_ipc
PTR sys_fsync
PTR sys32_sigreturn
PTR sys32_clone /* 4120 */
PTR sys_setdomainname
- PTR sys32_newuname
+ PTR sys_32_newuname
PTR sys_ni_syscall /* sys_modify_ldt */
PTR compat_sys_adjtimex
PTR sys_mprotect /* 4125 */
PTR sys_fchdir
PTR sys_bdflush
PTR sys_sysfs /* 4135 */
- PTR sys32_personality
+ PTR sys_32_personality
PTR sys_ni_syscall /* for afs_syscall */
PTR sys_setfsuid
PTR sys_setfsgid
- PTR sys32_llseek /* 4140 */
+ PTR sys_32_llseek /* 4140 */
PTR compat_sys_getdents
PTR compat_sys_select
PTR sys_flock
PTR sys_ni_syscall /* 4150 */
PTR sys_getsid
PTR sys_fdatasync
- PTR sys32_sysctl
+ PTR sys_32_sysctl
PTR sys_mlock
PTR sys_munlock /* 4155 */
PTR sys_mlockall
PTR sys_sched_yield
PTR sys_sched_get_priority_max
PTR sys_sched_get_priority_min
- PTR sys32_sched_rr_get_interval /* 4165 */
+ PTR sys_32_sched_rr_get_interval /* 4165 */
PTR compat_sys_nanosleep
PTR sys_mremap
PTR sys_accept
PTR sys_getresgid
PTR sys_prctl
PTR sys32_rt_sigreturn
- PTR sys32_rt_sigaction
- PTR sys32_rt_sigprocmask /* 4195 */
- PTR sys32_rt_sigpending
+ PTR sys_32_rt_sigaction
+ PTR sys_32_rt_sigprocmask /* 4195 */
+ PTR sys_32_rt_sigpending
PTR compat_sys_rt_sigtimedwait
- PTR sys32_rt_sigqueueinfo
+ PTR sys_32_rt_sigqueueinfo
PTR sys32_rt_sigsuspend
- PTR sys32_pread /* 4200 */
- PTR sys32_pwrite
+ PTR sys_32_pread /* 4200 */
+ PTR sys_32_pwrite
PTR sys_chown
PTR sys_getcwd
PTR sys_capget
PTR sys_capset /* 4205 */
PTR sys32_sigaltstack
- PTR sys32_sendfile
+ PTR sys_32_sendfile
PTR sys_ni_syscall
PTR sys_ni_syscall
- PTR sys32_mmap2 /* 4210 */
- PTR sys32_truncate64
- PTR sys32_ftruncate64
+ PTR sys_mips_mmap2 /* 4210 */
+ PTR sys_32_truncate64
+ PTR sys_32_ftruncate64
PTR sys_newstat
PTR sys_newlstat
PTR sys_newfstat /* 4215 */
PTR compat_sys_mq_notify /* 4275 */
PTR compat_sys_mq_getsetattr
PTR sys_ni_syscall /* sys_vserver */
- PTR sys32_waitid
+ PTR sys_32_waitid
PTR sys_ni_syscall /* available, was setaltroot */
PTR sys_add_key /* 4280 */
PTR sys_request_key
#include <linux/ptrace.h>
#include <linux/unistd.h>
#include <linux/compiler.h>
+#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <asm/abi.h>
}
#ifdef CONFIG_TRAD_SIGNALS
-asmlinkage int sys_sigaction(int sig, const struct sigaction __user *act,
- struct sigaction __user *oact)
+SYSCALL_DEFINE3(sigaction, int, sig, const struct sigaction __user *, act,
+ struct sigaction __user *, oact)
{
struct k_sigaction new_ka, old_ka;
int ret;
return -ERESTARTNOHAND;
}
-asmlinkage int sys32_sigaction(int sig, const struct sigaction32 __user *act,
- struct sigaction32 __user *oact)
+SYSCALL_DEFINE3(32_sigaction, long, sig, const struct sigaction32 __user *, act,
+ struct sigaction32 __user *, oact)
{
struct k_sigaction new_ka, old_ka;
int ret;
.restart = __NR_O32_restart_syscall
};
-asmlinkage int sys32_rt_sigaction(int sig, const struct sigaction32 __user *act,
- struct sigaction32 __user *oact,
- unsigned int sigsetsize)
+SYSCALL_DEFINE4(32_rt_sigaction, int, sig,
+ const struct sigaction32 __user *, act,
+ struct sigaction32 __user *, oact, unsigned int, sigsetsize)
{
struct k_sigaction new_sa, old_sa;
int ret = -EINVAL;
return ret;
}
-asmlinkage int sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
- compat_sigset_t __user *oset, unsigned int sigsetsize)
+SYSCALL_DEFINE4(32_rt_sigprocmask, int, how, compat_sigset_t __user *, set,
+ compat_sigset_t __user *, oset, unsigned int, sigsetsize)
{
sigset_t old_set, new_set;
int ret;
return ret;
}
-asmlinkage int sys32_rt_sigpending(compat_sigset_t __user *uset,
- unsigned int sigsetsize)
+SYSCALL_DEFINE2(32_rt_sigpending, compat_sigset_t __user *, uset,
+ unsigned int, sigsetsize)
{
int ret;
sigset_t set;
return ret;
}
-asmlinkage int sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo)
+SYSCALL_DEFINE3(32_rt_sigqueueinfo, int, pid, int, sig,
+ compat_siginfo_t __user *, uinfo)
{
siginfo_t info;
int ret;
return ret;
}
-asmlinkage long
-sys32_waitid(int which, compat_pid_t pid,
- compat_siginfo_t __user *uinfo, int options,
- struct compat_rusage __user *uru)
+SYSCALL_DEFINE5(32_waitid, int, which, compat_pid_t, pid,
+ compat_siginfo_t __user *, uinfo, int, options,
+ struct compat_rusage __user *, uru)
{
siginfo_t info;
struct rusage ru;
return error;
}
-asmlinkage unsigned long
-old_mmap(unsigned long addr, unsigned long len, int prot,
- int flags, int fd, off_t offset)
+SYSCALL_DEFINE6(mips_mmap, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags, unsigned long,
+ fd, off_t, offset)
{
unsigned long result;
return result;
}
-asmlinkage unsigned long
-sys_mmap2(unsigned long addr, unsigned long len, unsigned long prot,
- unsigned long flags, unsigned long fd, unsigned long pgoff)
+SYSCALL_DEFINE6(mips_mmap2, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags, unsigned long, fd,
+ unsigned long, pgoff)
{
if (pgoff & (~PAGE_MASK >> 12))
return -EINVAL;
/*
* Compacrapability ...
*/
-asmlinkage int sys_uname(struct old_utsname __user * name)
+SYSCALL_DEFINE1(uname, struct old_utsname __user *, name)
{
if (name && !copy_to_user(name, utsname(), sizeof (*name)))
return 0;
/*
* Compacrapability ...
*/
-asmlinkage int sys_olduname(struct oldold_utsname __user * name)
+SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name)
{
int error;
return error;
}
-asmlinkage int sys_set_thread_area(unsigned long addr)
+SYSCALL_DEFINE1(set_thread_area, unsigned long, addr)
{
struct thread_info *ti = task_thread_info(current);
return 0;
}
-asmlinkage int _sys_sysmips(int cmd, long arg1, int arg2, int arg3)
+asmlinkage int _sys_sysmips(long cmd, long arg1, long arg2, long arg3)
{
switch (cmd) {
case MIPS_ATOMIC_SET:
*
* This is really horribly ugly.
*/
-asmlinkage int sys_ipc(unsigned int call, int first, int second,
- unsigned long third, void __user *ptr, long fifth)
+SYSCALL_DEFINE6(ipc, unsigned int, call, int, first, int, second,
+ unsigned long, third, void __user *, ptr, long, fifth)
{
int version, ret;
/*
 * Not implemented yet ...
*/
-asmlinkage int sys_cachectl(char *addr, int nbytes, int op)
+SYSCALL_DEFINE3(cachectl, char *, addr, int, nbytes, int, op)
{
return -ENOSYS;
}
#include <linux/linkage.h>
#include <linux/module.h>
#include <linux/sched.h>
+#include <linux/syscalls.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>
* We could optimize the case where the cache argument is not BCACHE but
* that seems very atypical use ...
*/
-asmlinkage int sys_cacheflush(unsigned long addr,
- unsigned long bytes, unsigned int cache)
+SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
+ unsigned int, cache)
{
if (bytes == 0)
return 0;
compat_ulong_t __unused6;
};
+static inline int is_compat_task(void)
+{
+ return test_thread_flag(TIF_32BIT);
+}
+
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_COMPAT_H */
#ifndef _ASM_POWERPC_SECCOMP_H
#define _ASM_POWERPC_SECCOMP_H
-#ifdef __KERNEL__
-#include <linux/thread_info.h>
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
static int emulate_fp_pair(unsigned char __user *addr, unsigned int reg,
unsigned int flags)
{
- char *ptr = (char *) ¤t->thread.TS_FPR(reg);
- int i, ret;
+ char *ptr0 = (char *) ¤t->thread.TS_FPR(reg);
+ char *ptr1 = (char *) ¤t->thread.TS_FPR(reg+1);
+ int i, ret, sw = 0;
if (!(flags & F))
return 0;
if (reg & 1)
return 0; /* invalid form: FRS/FRT must be even */
- if (!(flags & SW)) {
- /* not byte-swapped - easy */
- if (!(flags & ST))
- ret = __copy_from_user(ptr, addr, 16);
- else
- ret = __copy_to_user(addr, ptr, 16);
- } else {
- /* each FPR value is byte-swapped separately */
- ret = 0;
- for (i = 0; i < 16; ++i) {
- if (!(flags & ST))
- ret |= __get_user(ptr[i^7], addr + i);
- else
- ret |= __put_user(ptr[i^7], addr + i);
+ if (flags & SW)
+ sw = 7;
+ ret = 0;
+ for (i = 0; i < 8; ++i) {
+ if (!(flags & ST)) {
+ ret |= __get_user(ptr0[i^sw], addr + i);
+ ret |= __get_user(ptr1[i^sw], addr + i + 8);
+ } else {
+ ret |= __put_user(ptr0[i^sw], addr + i);
+ ret |= __put_user(ptr1[i^sw], addr + i + 8);
}
}
if (ret)
72: std r8,8(r3)
beq+ 3f
addi r3,r3,16
-23: ld r9,8(r4)
.Ldo_tail:
bf cr7*4+1,1f
- rotldi r9,r9,32
+23: lwz r9,8(r4)
+ addi r4,r4,4
73: stw r9,0(r3)
addi r3,r3,4
1: bf cr7*4+2,2f
- rotldi r9,r9,16
+44: lhz r9,8(r4)
+ addi r4,r4,2
74: sth r9,0(r3)
addi r3,r3,2
2: bf cr7*4+3,3f
- rotldi r9,r9,8
+45: lbz r9,8(r4)
75: stb r9,0(r3)
3: li r3,0
blr
6: cmpwi cr1,r5,8
addi r3,r3,32
sld r9,r9,r10
- ble cr1,.Ldo_tail
+ ble cr1,7f
34: ld r0,8(r4)
srd r7,r0,r11
or r9,r7,r9
- b .Ldo_tail
+7:
+ bf cr7*4+1,1f
+ rotldi r9,r9,32
+94: stw r9,0(r3)
+ addi r3,r3,4
+1: bf cr7*4+2,2f
+ rotldi r9,r9,16
+95: sth r9,0(r3)
+ addi r3,r3,2
+2: bf cr7*4+3,3f
+ rotldi r9,r9,8
+96: stb r9,0(r3)
+3: li r3,0
+ blr
.Ldst_unaligned:
PPC_MTOCRF 0x01,r6 /* put #bytes to 8B bdry into cr7 */
121:
132:
addi r3,r3,8
-123:
134:
135:
138:
140:
141:
142:
+123:
+144:
+145:
/*
* here we have had a fault on a load and r3 points to the first
187:
188:
189:
+194:
+195:
+196:
1:
ld r6,-24(r1)
ld r5,-8(r1)
.llong 72b,172b
.llong 23b,123b
.llong 73b,173b
+ .llong 44b,144b
.llong 74b,174b
+ .llong 45b,145b
.llong 75b,175b
.llong 24b,124b
.llong 25b,125b
.llong 79b,179b
.llong 80b,180b
.llong 34b,134b
+ .llong 94b,194b
+ .llong 95b,195b
+ .llong 96b,196b
.llong 35b,135b
.llong 81b,181b
.llong 36b,136b
3: std r8,8(r3)
beq 3f
addi r3,r3,16
- ld r9,8(r4)
.Ldo_tail:
bf cr7*4+1,1f
- rotldi r9,r9,32
+ lwz r9,8(r4)
+ addi r4,r4,4
stw r9,0(r3)
addi r3,r3,4
1: bf cr7*4+2,2f
- rotldi r9,r9,16
+ lhz r9,8(r4)
+ addi r4,r4,2
sth r9,0(r3)
addi r3,r3,2
2: bf cr7*4+3,3f
- rotldi r9,r9,8
+ lbz r9,8(r4)
stb r9,0(r3)
3: ld r3,48(r1) /* return dest pointer */
blr
cmpwi cr1,r5,8
addi r3,r3,32
sld r9,r9,r10
- ble cr1,.Ldo_tail
+ ble cr1,6f
ld r0,8(r4)
srd r7,r0,r11
or r9,r7,r9
- b .Ldo_tail
+6:
+ bf cr7*4+1,1f
+ rotldi r9,r9,32
+ stw r9,0(r3)
+ addi r3,r3,4
+1: bf cr7*4+2,2f
+ rotldi r9,r9,16
+ sth r9,0(r3)
+ addi r3,r3,2
+2: bf cr7*4+3,3f
+ rotldi r9,r9,8
+ stb r9,0(r3)
+3: ld r3,48(r1) /* return dest pointer */
+ blr
.Ldst_unaligned:
PPC_MTOCRF 0x01,r6 # put #bytes to 8B bdry into cr7
{
u32 ma, pcila, pciha;
+ /* Hack warning ! The "old" PCI 2.x cell only lets us configure the
+ * low 32 bits of incoming PLB addresses. The top 4 bits of the 36-bit
+ * address are actually hard wired to a value that appears to depend
+ * on the specific SoC. For example, it's 0 on 440EP and 1 on 440EPx.
+ *
+ * The trick here is we just crop those top bits and ignore them when
+ * programming the chip. That means the device-tree has to be right
+ * for the specific part used (we don't print a warning if it's wrong
+ * but on the other hand, you'll crash quickly enough), but at least
+ * this code should work whatever the hard coded value is
+ */
+ plb_addr &= 0xffffffffull;
+
+ /* Note: Due to the above hack, the test below doesn't actually test
+ * whether your address is above 4G, but it tests that the address and
+ * (address + size) are both contained in the same 4G
+ */
if ((plb_addr + size) > 0xffffffffull || !is_power_of_2(size) ||
size < 0x1000 || (plb_addr & (size - 1)) != 0) {
printk(KERN_WARNING "%s: Resource out of range\n",
module_init(aes_s390_init);
module_exit(aes_s390_fini);
-MODULE_ALIAS("aes");
+MODULE_ALIAS("aes-all");
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
MODULE_LICENSE("GPL");
#include <linux/gpio.h>
#include <linux/spi/spi.h>
#include <linux/spi/spi_gpio.h>
-#include <media/ov772x.h>
#include <media/soc_camera_platform.h>
#include <media/sh_mobile_ceu.h>
#include <video/sh_mobile_lcdc.h>
}
#ifdef CONFIG_I2C
-/* support for the old ncm03j camera */
static unsigned char camera_ncm03j_magic[] =
{
0x87, 0x00, 0x88, 0x08, 0x89, 0x01, 0x8A, 0xE8,
0x63, 0xD4, 0x64, 0xEA, 0xD6, 0x0F,
};
-static int camera_probe(void)
-{
- struct i2c_adapter *a = i2c_get_adapter(0);
- struct i2c_msg msg;
- int ret;
-
- camera_power(1);
- msg.addr = 0x6e;
- msg.buf = camera_ncm03j_magic;
- msg.len = 2;
- msg.flags = 0;
- ret = i2c_transfer(a, &msg, 1);
- camera_power(0);
-
- return ret;
-}
-
static int camera_set_capture(struct soc_camera_platform_info *info,
int enable)
{
.platform_data = &camera_info,
},
};
-
-static int __init camera_setup(void)
-{
- if (camera_probe() > 0)
- platform_device_register(&camera_device);
-
- return 0;
-}
-late_initcall(camera_setup);
-
#endif /* CONFIG_I2C */
-static int ov7725_power(struct device *dev, int mode)
-{
- camera_power(0);
- if (mode)
- camera_power(1);
-
- return 0;
-}
-
-static struct ov772x_camera_info ov7725_info = {
- .buswidth = SOCAM_DATAWIDTH_8,
- .flags = OV772X_FLAG_VFLIP | OV772X_FLAG_HFLIP,
- .link = {
- .power = ov7725_power,
- },
-};
-
static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
.flags = SOCAM_PCLK_SAMPLE_RISING | SOCAM_HSYNC_ACTIVE_HIGH |
SOCAM_VSYNC_ACTIVE_HIGH | SOCAM_MASTER | SOCAM_DATAWIDTH_8,
&ap325rxa_nor_flash_device,
&lcdc_device,
&ceu_device,
+#ifdef CONFIG_I2C
+ &camera_device,
+#endif
&nand_flash_device,
&sdcard_cn3_device,
};
{
I2C_BOARD_INFO("pcf8563", 0x51),
},
- {
- I2C_BOARD_INFO("ov772x", 0x21),
- .platform_data = &ov7725_info,
- },
};
static struct spi_board_info ap325rxa_spi_devices[] = {
#include <asm/freq.h>
#include <asm/io.h>
-const static int pll1rate[]={1,2,3,4,6,8};
-const static int pfc_divisors[]={1,2,3,4,6,8,12};
+static const int pll1rate[]={1,2,3,4,6,8};
+static const int pfc_divisors[]={1,2,3,4,6,8,12};
#define ifc_divisors pfc_divisors
#if (CONFIG_SH_CLK_MD == 0)
unsigned int __unused2;
};
+static inline int is_compat_task(void)
+{
+ return test_thread_flag(TIF_32BIT);
+}
+
#endif /* _ASM_SPARC64_COMPAT_H */
#ifndef _ASM_SECCOMP_H
-#include <linux/thread_info.h> /* already defines TIF_32BIT */
-
-#ifndef TIF_32BIT
-#error "unexpected TIF_32BIT on sparc64"
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
buf[1] = '?';
buf[2] = '?';
buf[3] = '\0';
+ return 0;
}
p = dp->controller;
prop = &p->layout;
remapping devices.
config DMAR_DEFAULT_ON
- def_bool n
+ def_bool y
prompt "Enable DMA Remapping Devices by default"
depends on DMAR
help
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
+int
+is_io_mapping_possible(resource_size_t base, unsigned long size);
+
void *
iomap_atomic_prot_pfn(unsigned long pfn, enum km_type type, pgprot_t prot);
#ifndef _ASM_X86_SECCOMP_32_H
#define _ASM_X86_SECCOMP_32_H
-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on i386"
-#endif
-
#include <linux/unistd.h>
#define __NR_seccomp_read __NR_read
#ifndef _ASM_X86_SECCOMP_64_H
#define _ASM_X86_SECCOMP_64_H
-#include <linux/thread_info.h>
-
-#ifdef TIF_32BIT
-#error "unexpected TIF_32BIT on x86_64"
-#else
-#define TIF_32BIT TIF_IA32
-#endif
-
#include <linux/unistd.h>
#include <asm/ia32_unistd.h>
#ifdef CONFIG_X86_32
# define IS_IA32 1
#elif defined CONFIG_IA32_EMULATION
-# define IS_IA32 test_thread_flag(TIF_IA32)
+# define IS_IA32 is_compat_task()
#else
# define IS_IA32 0
#endif
/* Bitmask of CPUs present in the system - exported by i386_syms.c, used
* by scheduler but indexed physically */
-cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
+static cpumask_t voyager_phys_cpu_present_map = CPU_MASK_NONE;
/* The internal functions */
static void send_CPI(__u32 cpuset, __u8 cpi);
/* set up everything for just this CPU, we can alter
* this as we start the other CPUs later */
/* now get the CPU disposition from the extended CMOS */
- cpus_addr(phys_cpu_present_map)[0] =
+ cpus_addr(voyager_phys_cpu_present_map)[0] =
voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK);
- cpus_addr(phys_cpu_present_map)[0] |=
+ cpus_addr(voyager_phys_cpu_present_map)[0] |=
voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 1) << 8;
- cpus_addr(phys_cpu_present_map)[0] |=
+ cpus_addr(voyager_phys_cpu_present_map)[0] |=
voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK +
2) << 16;
- cpus_addr(phys_cpu_present_map)[0] |=
+ cpus_addr(voyager_phys_cpu_present_map)[0] |=
voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK +
3) << 24;
- init_cpu_possible(&phys_cpu_present_map);
- printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n",
- cpus_addr(phys_cpu_present_map)[0]);
+ init_cpu_possible(&voyager_phys_cpu_present_map);
+ printk("VOYAGER SMP: voyager_phys_cpu_present_map = 0x%lx\n",
+ cpus_addr(voyager_phys_cpu_present_map)[0]);
/* Here we set up the VIC to enable SMP */
/* enable the CPIs by writing the base vector to their register */
outb(VIC_DEFAULT_CPI_BASE, VIC_CPI_BASE_REGISTER);
/* now that the cat has probed the Voyager System Bus, sanity
* check the cpu map */
if (((voyager_quad_processors | voyager_extended_vic_processors)
- & cpus_addr(phys_cpu_present_map)[0]) !=
- cpus_addr(phys_cpu_present_map)[0]) {
+ & cpus_addr(voyager_phys_cpu_present_map)[0]) !=
+ cpus_addr(voyager_phys_cpu_present_map)[0]) {
/* should panic */
printk("\n\n***WARNING*** "
"Sanity check of CPU present map FAILED\n");
}
} else if (voyager_level == 4)
voyager_extended_vic_processors =
- cpus_addr(phys_cpu_present_map)[0];
+ cpus_addr(voyager_phys_cpu_present_map)[0];
/* this sets up the idle task to run on the current cpu */
voyager_extended_cpus = 1;
/* loop over all the extended VIC CPUs and boot them. The
* Quad CPUs must be bootstrapped by their extended VIC cpu */
for (i = 0; i < nr_cpu_ids; i++) {
- if (i == boot_cpu_id || !cpu_isset(i, phys_cpu_present_map))
+ if (i == boot_cpu_id || !cpu_isset(i, voyager_phys_cpu_present_map))
continue;
do_boot_cpu(i);
/* This udelay seems to be needed for the Quad boots
pos = start_pfn << PAGE_SHIFT;
end_pfn = ((pos + (PMD_SIZE - 1)) >> PMD_SHIFT)
<< (PMD_SHIFT - PAGE_SHIFT);
+ if (end_pfn > (end >> PAGE_SHIFT))
+ end_pfn = end >> PAGE_SHIFT;
if (start_pfn < end_pfn) {
nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
pos = end_pfn << PAGE_SHIFT;
#include <asm/pat.h>
#include <linux/module.h>
+int is_io_mapping_possible(resource_size_t base, unsigned long size)
+{
+#ifndef CONFIG_X86_PAE
+ /* There is no way to map greater than 1 << 32 address without PAE */
+ if (base + size > 0x100000000ULL)
+ return 0;
+#endif
+ return 1;
+}
+EXPORT_SYMBOL_GPL(is_io_mapping_possible);
+
/* Map 'pfn' using fixed map 'type' and protections 'prot'
*/
void *
struct list_head list;
struct kmmio_fault_page *release_next;
unsigned long page; /* location of the fault page */
+ bool old_presence; /* page presence prior to arming */
+ bool armed;
/*
* Number of times this page has been registered as a part
* of a probe. If zero, page is disarmed and this may be freed.
- * Used only by writers (RCU).
+ * Used only by writers (RCU) and post_kmmio_handler().
+ * Protected by kmmio_lock, when linked into kmmio_page_table.
*/
int count;
};
return NULL;
}
-static void set_page_present(unsigned long addr, bool present,
- unsigned int *pglevel)
+static void set_pmd_presence(pmd_t *pmd, bool present, bool *old)
+{
+ pmdval_t v = pmd_val(*pmd);
+ *old = !!(v & _PAGE_PRESENT);
+ v &= ~_PAGE_PRESENT;
+ if (present)
+ v |= _PAGE_PRESENT;
+ set_pmd(pmd, __pmd(v));
+}
+
+static void set_pte_presence(pte_t *pte, bool present, bool *old)
+{
+ pteval_t v = pte_val(*pte);
+ *old = !!(v & _PAGE_PRESENT);
+ v &= ~_PAGE_PRESENT;
+ if (present)
+ v |= _PAGE_PRESENT;
+ set_pte_atomic(pte, __pte(v));
+}
+
+static int set_page_presence(unsigned long addr, bool present, bool *old)
{
- pteval_t pteval;
- pmdval_t pmdval;
unsigned int level;
- pmd_t *pmd;
pte_t *pte = lookup_address(addr, &level);
if (!pte) {
pr_err("kmmio: no pte for page 0x%08lx\n", addr);
- return;
+ return -1;
}
- if (pglevel)
- *pglevel = level;
-
switch (level) {
case PG_LEVEL_2M:
- pmd = (pmd_t *)pte;
- pmdval = pmd_val(*pmd) & ~_PAGE_PRESENT;
- if (present)
- pmdval |= _PAGE_PRESENT;
- set_pmd(pmd, __pmd(pmdval));
+ set_pmd_presence((pmd_t *)pte, present, old);
break;
-
case PG_LEVEL_4K:
- pteval = pte_val(*pte) & ~_PAGE_PRESENT;
- if (present)
- pteval |= _PAGE_PRESENT;
- set_pte_atomic(pte, __pte(pteval));
+ set_pte_presence(pte, present, old);
break;
-
default:
pr_err("kmmio: unexpected page level 0x%x.\n", level);
- return;
+ return -1;
}
__flush_tlb_one(addr);
+ return 0;
}
-/** Mark the given page as not present. Access to it will trigger a fault. */
-static void arm_kmmio_fault_page(unsigned long page, unsigned int *pglevel)
+/*
+ * Mark the given page as not present. Access to it will trigger a fault.
+ *
+ * Struct kmmio_fault_page is protected by RCU and kmmio_lock, but the
+ * protection is ignored here. RCU read lock is assumed held, so the struct
+ * will not disappear unexpectedly. Furthermore, the caller must guarantee
+ * that double arming the same virtual address (page) cannot occur.
+ *
+ * Double disarming on the other hand is allowed, and may occur when a fault
+ * and mmiotrace shutdown happen simultaneously.
+ */
+static int arm_kmmio_fault_page(struct kmmio_fault_page *f)
{
- set_page_present(page & PAGE_MASK, false, pglevel);
+ int ret;
+ WARN_ONCE(f->armed, KERN_ERR "kmmio page already armed.\n");
+ if (f->armed) {
+ pr_warning("kmmio double-arm: page 0x%08lx, ref %d, old %d\n",
+ f->page, f->count, f->old_presence);
+ }
+ ret = set_page_presence(f->page, false, &f->old_presence);
+ WARN_ONCE(ret < 0, KERN_ERR "kmmio arming 0x%08lx failed.\n", f->page);
+ f->armed = true;
+ return ret;
}
-/** Mark the given page as present. */
-static void disarm_kmmio_fault_page(unsigned long page, unsigned int *pglevel)
+/** Restore the given page to saved presence state. */
+static void disarm_kmmio_fault_page(struct kmmio_fault_page *f)
{
- set_page_present(page & PAGE_MASK, true, pglevel);
+ bool tmp;
+ int ret = set_page_presence(f->page, f->old_presence, &tmp);
+ WARN_ONCE(ret < 0,
+ KERN_ERR "kmmio disarming 0x%08lx failed.\n", f->page);
+ f->armed = false;
}
/*
ctx = &get_cpu_var(kmmio_ctx);
if (ctx->active) {
- disarm_kmmio_fault_page(faultpage->page, NULL);
if (addr == ctx->addr) {
/*
- * On SMP we sometimes get recursive probe hits on the
- * same address. Context is already saved, fall out.
+ * A second fault on the same page means some other
+ * condition needs handling by do_page_fault(); the
+ * page really not being present is the most common.
*/
- pr_debug("kmmio: duplicate probe hit on CPU %d, for "
- "address 0x%08lx.\n",
- smp_processor_id(), addr);
- ret = 1;
- goto no_kmmio_ctx;
- }
- /*
- * Prevent overwriting already in-flight context.
- * This should not happen, let's hope disarming at least
- * prevents a panic.
- */
- pr_emerg("kmmio: recursive probe hit on CPU %d, "
+ pr_debug("kmmio: secondary hit for 0x%08lx CPU %d.\n",
+ addr, smp_processor_id());
+
+ if (!faultpage->old_presence)
+ pr_info("kmmio: unexpected secondary hit for "
+ "address 0x%08lx on CPU %d.\n", addr,
+ smp_processor_id());
+ } else {
+ /*
+ * Prevent overwriting already in-flight context.
+ * This should not happen, let's hope disarming at
+ * least prevents a panic.
+ */
+ pr_emerg("kmmio: recursive probe hit on CPU %d, "
"for address 0x%08lx. Ignoring.\n",
smp_processor_id(), addr);
- pr_emerg("kmmio: previous hit was at 0x%08lx.\n",
- ctx->addr);
+ pr_emerg("kmmio: previous hit was at 0x%08lx.\n",
+ ctx->addr);
+ disarm_kmmio_fault_page(faultpage);
+ }
goto no_kmmio_ctx;
}
ctx->active++;
regs->flags &= ~X86_EFLAGS_IF;
/* Now we set present bit in PTE and single step. */
- disarm_kmmio_fault_page(ctx->fpage->page, NULL);
+ disarm_kmmio_fault_page(ctx->fpage);
/*
* If another cpu accesses the same page while we are stepping,
struct kmmio_context *ctx = &get_cpu_var(kmmio_ctx);
if (!ctx->active) {
- pr_debug("kmmio: spurious debug trap on CPU %d.\n",
+ pr_warning("kmmio: spurious debug trap on CPU %d.\n",
smp_processor_id());
goto out;
}
if (ctx->probe && ctx->probe->post_handler)
ctx->probe->post_handler(ctx->probe, condition, regs);
- arm_kmmio_fault_page(ctx->fpage->page, NULL);
+ /* Prevent racing against release_kmmio_fault_page(). */
+ spin_lock(&kmmio_lock);
+ if (ctx->fpage->count)
+ arm_kmmio_fault_page(ctx->fpage);
+ spin_unlock(&kmmio_lock);
regs->flags &= ~X86_EFLAGS_TF;
regs->flags |= ctx->saved_flags;
f = get_kmmio_fault_page(page);
if (f) {
if (!f->count)
- arm_kmmio_fault_page(f->page, NULL);
+ arm_kmmio_fault_page(f);
f->count++;
return 0;
}
- f = kmalloc(sizeof(*f), GFP_ATOMIC);
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
return -1;
f->count = 1;
f->page = page;
- list_add_rcu(&f->list, kmmio_page_list(f->page));
- arm_kmmio_fault_page(f->page, NULL);
+ if (arm_kmmio_fault_page(f)) {
+ kfree(f);
+ return -1;
+ }
+
+ list_add_rcu(&f->list, kmmio_page_list(f->page));
return 0;
}
f->count--;
BUG_ON(f->count < 0);
if (!f->count) {
- disarm_kmmio_fault_page(f->page, NULL);
+ disarm_kmmio_fault_page(f);
f->release_next = *release_list;
*release_list = f;
}
#include <linux/bootmem.h>
#include <linux/debugfs.h>
#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/fs.h>
else
return pgprot_noncached(prot);
}
+EXPORT_SYMBOL_GPL(pgprot_writecombine);
#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_X86_PAT)
/*
- * Written by Pekka Paalanen, 2008 <pq@iki.fi>
+ * Written by Pekka Paalanen, 2008-2009 <pq@iki.fi>
*/
#include <linux/module.h>
#include <linux/io.h>
static unsigned long mmio_address;
module_param(mmio_address, ulong, 0);
-MODULE_PARM_DESC(mmio_address, "Start address of the mapping of 16 kB.");
+MODULE_PARM_DESC(mmio_address, " Start address of the mapping of 16 kB "
+ "(or 8 MB if read_far is non-zero).");
+
+static unsigned long read_far = 0x400100;
+module_param(read_far, ulong, 0);
+MODULE_PARM_DESC(read_far, " Offset of a 32-bit read within 8 MB "
+ "(default: 0x400100).");
+
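+/*
+ * Typical usage is something like the following (a sketch; the module
+ * name is assumed to be testmmiotrace, and mmio_address must point at
+ * real MMIO space that is safe to poke):
+ *
+ *	modprobe testmmiotrace mmio_address=0xf0000000
+ */
+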
+static unsigned v16(unsigned i)
+{
+ return i * 12 + 7;
+}
+
+static unsigned v32(unsigned i)
+{
+ return i * 212371 + 13;
+}
static void do_write_test(void __iomem *p)
{
unsigned int i;
+ pr_info(MODULE_NAME ": write test.\n");
mmiotrace_printk("Write test.\n");
+
for (i = 0; i < 256; i++)
iowrite8(i, p + i);
+
for (i = 1024; i < (5 * 1024); i += 2)
- iowrite16(i * 12 + 7, p + i);
+ iowrite16(v16(i), p + i);
+
for (i = (5 * 1024); i < (16 * 1024); i += 4)
- iowrite32(i * 212371 + 13, p + i);
+ iowrite32(v32(i), p + i);
}
static void do_read_test(void __iomem *p)
{
unsigned int i;
+ unsigned errs[3] = { 0 };
+ pr_info(MODULE_NAME ": read test.\n");
mmiotrace_printk("Read test.\n");
+
for (i = 0; i < 256; i++)
- ioread8(p + i);
+ if (ioread8(p + i) != i)
+ ++errs[0];
+
for (i = 1024; i < (5 * 1024); i += 2)
- ioread16(p + i);
+ if (ioread16(p + i) != v16(i))
+ ++errs[1];
+
for (i = (5 * 1024); i < (16 * 1024); i += 4)
- ioread32(p + i);
+ if (ioread32(p + i) != v32(i))
+ ++errs[2];
+
+ mmiotrace_printk("Read errors: 8-bit %d, 16-bit %d, 32-bit %d.\n",
+ errs[0], errs[1], errs[2]);
}
-static void do_test(void)
+static void do_read_far_test(void __iomem *p)
{
- void __iomem *p = ioremap_nocache(mmio_address, 0x4000);
+ pr_info(MODULE_NAME ": read far test.\n");
+ mmiotrace_printk("Read far test.\n");
+
+ ioread32(p + read_far);
+}
+
+static void do_test(unsigned long size)
+{
+ void __iomem *p = ioremap_nocache(mmio_address, size);
if (!p) {
pr_err(MODULE_NAME ": could not ioremap, aborting.\n");
return;
mmiotrace_printk("ioremap returned %p.\n", p);
do_write_test(p);
do_read_test(p);
+ if (read_far && read_far < size - 4)
+ do_read_far_test(p);
iounmap(p);
}
static int __init init(void)
{
+ unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
+
if (mmio_address == 0) {
pr_err(MODULE_NAME ": you have to use the module argument "
"mmio_address.\n");
return -ENXIO;
}
- pr_warning(MODULE_NAME ": WARNING: mapping 16 kB @ 0x%08lx "
- "in PCI address space, and writing "
- "rubbish in there.\n", mmio_address);
- do_test();
+ pr_warning(MODULE_NAME ": WARNING: mapping %lu kB @ 0x%08lx in PCI "
+ "address space, and writing 16 kB of rubbish in there.\n",
+ size >> 10, mmio_address);
+ do_test(size);
+ pr_info(MODULE_NAME ": All done.\n");
return 0;
}
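
For reference, the parameters above are supplied at module load time. A
hedged usage sketch (the mapping address is illustrative only; the module
name assumes the usual testmmiotrace build):

  # modprobe testmmiotrace mmio_address=0xfe000000 read_far=0x400100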
if (cpu_has_arch_perfmon) {
union cpuid10_eax eax;
eax.full = cpuid_eax(0xa);
- if (counter_width < eax.split.bit_width)
- counter_width = eax.split.bit_width;
+
+ /*
+ * For Core2 (family 6, model 15), don't reset the
+ * counter width:
+ */
+ if (!(eax.split.version_id == 0 &&
+ current_cpu_data.x86 == 6 &&
+ current_cpu_data.x86_model == 15)) {
+
+ if (counter_width < eax.split.bit_width)
+ counter_width = eax.split.bit_width;
+ }
}
/* clear all counters */
possible map and a non-dummy shared_info. */
per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
+ local_irq_disable();
+ early_boot_irqs_off();
+
xen_raw_console_write("mapping kernel into physical memory\n");
pgd = xen_setup_kernel_pagetable(pgd, xen_start_info->nr_pages);
}
}
-void blk_recalc_rq_segments(struct request *rq)
+static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
+ struct bio *bio,
+ unsigned int *seg_size_ptr)
{
- int nr_phys_segs;
unsigned int phys_size;
struct bio_vec *bv, *bvprv = NULL;
- int seg_size;
- int cluster;
- struct req_iterator iter;
- int high, highprv = 1;
- struct request_queue *q = rq->q;
+ int cluster, i, high, highprv = 1;
+ unsigned int seg_size, nr_phys_segs;
+ struct bio *fbio;
- if (!rq->bio)
- return;
+ if (!bio)
+ return 0;
+ fbio = bio;
cluster = test_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags);
seg_size = 0;
phys_size = nr_phys_segs = 0;
- rq_for_each_segment(bv, rq, iter) {
- /*
- * the trick here is making sure that a high page is never
- * considered part of another segment, since that might
- * change with the bounce page.
- */
- high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
- if (high || highprv)
- goto new_segment;
- if (cluster) {
- if (seg_size + bv->bv_len > q->max_segment_size)
- goto new_segment;
- if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
- goto new_segment;
- if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+ for_each_bio(bio) {
+ bio_for_each_segment(bv, bio, i) {
+ /*
+ * the trick here is making sure that a high page is
+ * never considered part of another segment, since that
+ * might change with the bounce page.
+ */
+ high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
+ if (high || highprv)
goto new_segment;
+ if (cluster) {
+ if (seg_size + bv->bv_len > q->max_segment_size)
+ goto new_segment;
+ if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
+ goto new_segment;
+ if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bv))
+ goto new_segment;
+
+ seg_size += bv->bv_len;
+ bvprv = bv;
+ continue;
+ }
+new_segment:
+ if (nr_phys_segs == 1 && seg_size >
+ fbio->bi_seg_front_size)
+ fbio->bi_seg_front_size = seg_size;
- seg_size += bv->bv_len;
+ nr_phys_segs++;
bvprv = bv;
- continue;
+ seg_size = bv->bv_len;
+ highprv = high;
}
-new_segment:
- if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
- rq->bio->bi_seg_front_size = seg_size;
-
- nr_phys_segs++;
- bvprv = bv;
- seg_size = bv->bv_len;
- highprv = high;
}
- if (nr_phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
+ if (seg_size_ptr)
+ *seg_size_ptr = seg_size;
+
+ return nr_phys_segs;
+}
+
+void blk_recalc_rq_segments(struct request *rq)
+{
+ unsigned int seg_size = 0, phys_segs;
+
+ phys_segs = __blk_recalc_rq_segments(rq->q, rq->bio, &seg_size);
+
+ if (phys_segs == 1 && seg_size > rq->bio->bi_seg_front_size)
rq->bio->bi_seg_front_size = seg_size;
if (seg_size > rq->biotail->bi_seg_back_size)
rq->biotail->bi_seg_back_size = seg_size;
- rq->nr_phys_segments = nr_phys_segs;
+ rq->nr_phys_segments = phys_segs;
}
void blk_recount_segments(struct request_queue *q, struct bio *bio)
{
- struct request rq;
struct bio *nxt = bio->bi_next;
- rq.q = q;
- rq.bio = rq.biotail = bio;
+
bio->bi_next = NULL;
- blk_recalc_rq_segments(&rq);
+ bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, NULL);
bio->bi_next = nxt;
- bio->bi_phys_segments = rq.nr_phys_segments;
bio->bi_flags |= (1 << BIO_SEG_VALID);
}
EXPORT_SYMBOL(blk_recount_segments);
}
#endif /* CONFIG_PROC_FS */
+/**
+ * register_blkdev - register a new block device
+ *
+ * @major: the requested major device number [1..255]. If @major=0, try to
+ * allocate any unused major number.
+ * @name: the name of the new block device as a zero-terminated string
+ *
+ * The @name must be unique within the system.
+ *
+ * The return value depends on the @major input parameter.
+ * - if a major device number was requested in range [1..255] then the
+ * function returns zero on success, or a negative error code
+ * - if any unused major number was requested with @major=0 parameter
+ * then the return value is the allocated major number in range
+ * [1..255] or a negative error code otherwise
+ */
int register_blkdev(unsigned int major, const char *name)
{
struct blk_major_name **n, *p;
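
The kernel-doc above distinguishes two calling modes; a minimal sketch of a
caller handling both (the driver name "example" and all identifiers here are
hypothetical, not part of this patch):

	#include <linux/module.h>
	#include <linux/fs.h>

	static int example_major;	/* hypothetical */

	static int __init example_init(void)
	{
		/* @major == 0: ask for any unused major number */
		int ret = register_blkdev(0, "example");

		if (ret < 0)
			return ret;	/* negative error code in both modes */

		example_major = ret;	/* dynamic mode returns the major */
		return 0;
	}

	static void __exit example_exit(void)
	{
		unregister_blkdev(example_major, "example");
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");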
mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
type &= mask;
- alg = try_then_request_module(crypto_alg_lookup(name, type, mask),
- name);
+ alg = crypto_alg_lookup(name, type, mask);
+ if (!alg) {
+ char tmp[CRYPTO_MAX_ALG_NAME];
+
+ request_module(name);
+
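+ /* fallback-tolerant callers may also be satisfied by a
+ "<name>-all" module alias */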
+ if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask) &&
+ snprintf(tmp, sizeof(tmp), "%s-all", name) < sizeof(tmp))
+ request_module(tmp);
+
+ alg = crypto_alg_lookup(name, type, mask);
+ }
+
if (alg)
return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg;
#include <linux/libata.h>
#define DRV_NAME "pata_amd"
-#define DRV_VERSION "0.3.11"
+#define DRV_VERSION "0.4.1"
/**
* timing_setup - shared timing computation and load
return ata_sff_prereset(link, deadline);
}
+/**
+ * amd_cable_detect - report cable type
+ * @ap: port
+ *
+ * AMD controller/BIOS setups record the cable type in word 0x42
+ */
+
static int amd_cable_detect(struct ata_port *ap)
{
static const u32 bitmask[2] = {0x03, 0x0C};
return ATA_CBL_PATA40;
}
+/**
+ * amd_fifo_setup - set the PIO FIFO for ATA/ATAPI
+ * @ap: ATA interface
+ * @adev: ATA device
+ *
+ * Set the PCI fifo for this device according to the devices present
+ * on the bus at this point in time. We need to turn the post write buffer
+ * off for ATAPI devices as we may need to issue a word sized write to the
+ * device as the final I/O.
+ */
+
+static void amd_fifo_setup(struct ata_port *ap)
+{
+ struct ata_device *adev;
+ struct pci_dev *pdev = to_pci_dev(ap->host->dev);
+ static const u8 fifobit[2] = { 0xC0, 0x30};
+ u8 fifo = fifobit[ap->port_no];
+ u8 r;
+
+ ata_for_each_dev(adev, &ap->link, ENABLED) {
+ if (adev->class == ATA_DEV_ATAPI)
+ fifo = 0;
+ }
+ if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411) /* FIFO is broken */
+ fifo = 0;
+
+ /* On the later chips the read prefetch bits become no-op bits */
+ pci_read_config_byte(pdev, 0x41, &r);
+ r &= ~fifobit[ap->port_no];
+ r |= fifo;
+ pci_write_config_byte(pdev, 0x41, r);
+}
+
/**
* amd33_set_piomode - set initial PIO mode data
* @ap: ATA interface
static void amd33_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
+ amd_fifo_setup(ap);
timing_setup(ap, adev, 0x40, adev->pio_mode, 1);
}
static void amd66_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
+ amd_fifo_setup(ap);
timing_setup(ap, adev, 0x40, adev->pio_mode, 2);
}
static void amd100_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
+ amd_fifo_setup(ap);
timing_setup(ap, adev, 0x40, adev->pio_mode, 3);
}
static void amd133_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
+ amd_fifo_setup(ap);
timing_setup(ap, adev, 0x40, adev->pio_mode, 4);
}
.set_dmamode = nv133_set_dmamode,
};
+static void amd_clear_fifo(struct pci_dev *pdev)
+{
+ u8 fifo;
+ /* Disable the FIFO, the FIFO logic will re-enable it as
+ appropriate */
+ pci_read_config_byte(pdev, 0x41, &fifo);
+ fifo &= 0x0F;
+ pci_write_config_byte(pdev, 0x41, fifo);
+}
+
static int amd_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
{
static const struct ata_port_info info[10] = {
if (type < 3)
ata_pci_bmdma_clear_simplex(pdev);
-
- /* Check for AMD7411 */
- if (type == 3)
- /* FIFO is broken */
- pci_write_config_byte(pdev, 0x41, fifo & 0x0F);
- else
- pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
-
+ if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ amd_clear_fifo(pdev);
/* Cable detection on Nvidia chips doesn't work too well,
* cache BIOS programmed UDMA mode.
*/
return rc;
if (pdev->vendor == PCI_VENDOR_ID_AMD) {
- u8 fifo;
- pci_read_config_byte(pdev, 0x41, &fifo);
- if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7411)
- /* FIFO is broken */
- pci_write_config_byte(pdev, 0x41, fifo & 0x0F);
- else
- pci_write_config_byte(pdev, 0x41, fifo | 0xF0);
+ amd_clear_fifo(pdev);
if (pdev->device == PCI_DEVICE_ID_AMD_VIPER_7409 ||
pdev->device == PCI_DEVICE_ID_AMD_COBRA_7401)
ata_pci_bmdma_clear_simplex(pdev);
}
-
ata_host_resume(host);
return 0;
}
id[83] |= 0x4400; /* Word 83 is valid and LBA48 */
id[86] |= 0x0400; /* LBA48 on */
id[ATA_ID_MAJOR_VER] |= 0x1F;
+ /* Clear the serial number because it's different each boot
+ which breaks validation on resume */
+ memset(&id[ATA_ID_SERNO], 0x20, ATA_ID_SERNO_LEN);
}
return err_mask;
}
static unsigned int pdc_data_xfer_vlb(struct ata_device *dev,
unsigned char *buf, unsigned int buflen, int rw)
{
- if (ata_id_has_dword_io(dev->id)) {
+ int slop = buflen & 3;
+ /* 32bit I/O capable *and* we need to write a whole number of dwords */
+ if (ata_id_has_dword_io(dev->id) && (slop == 0 || slop == 3)) {
struct ata_port *ap = dev->link->ap;
- int slop = buflen & 3;
unsigned long flags;
local_irq_save(flags);
struct ata_port *ap = adev->link->ap;
int slop = buflen & 3;
- if (ata_id_has_dword_io(adev->id)) {
+ if (ata_id_has_dword_io(adev->id) && (slop == 0 || slop == 3)) {
if (rw == WRITE)
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
else
writelfl(0, hc_mmio + HC_IRQ_CAUSE_OFS);
}
- if (!IS_SOC(hpriv)) {
- /* Clear any currently outstanding host interrupt conditions */
- writelfl(0, mmio + hpriv->irq_cause_ofs);
+ /* Clear any currently outstanding host interrupt conditions */
+ writelfl(0, mmio + hpriv->irq_cause_ofs);
- /* and unmask interrupt generation for host regs */
- writelfl(hpriv->unmask_all_irqs, mmio + hpriv->irq_mask_ofs);
+ /* and unmask interrupt generation for host regs */
+ writelfl(hpriv->unmask_all_irqs, mmio + hpriv->irq_mask_ofs);
- /*
- * enable only global host interrupts for now.
- * The per-port interrupts get done later as ports are set up.
- */
- mv_set_main_irq_mask(host, 0, PCI_ERR);
- }
+ /*
+ * enable only global host interrupts for now.
+ * The per-port interrupts get done later as ports are set up.
+ */
+ mv_set_main_irq_mask(host, 0, PCI_ERR);
done:
return rc;
}
schedule_timeout_uninterruptible(30*HZ);
/* Now try to get the controller to respond to a no-op */
- for (i=0; i<12; i++) {
+ for (i=0; i<30; i++) {
if (cciss_noop(pdev) == 0)
break;
- else
- printk("cciss: no-op failed%s\n", (i < 11 ? "; re-trying" : ""));
+
+ schedule_timeout_uninterruptible(HZ);
+ }
+ if (i == 30) {
+ printk(KERN_ERR "cciss: controller seems dead\n");
+ return -EBUSY;
}
}
#include <linux/hdreg.h>
#include <linux/cdrom.h>
#include <linux/module.h>
+#include <linux/scatterlist.h>
#include <xen/xenbus.h>
#include <xen/grant_table.h>
enum blkif_state connected;
int ring_ref;
struct blkif_front_ring ring;
+ struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
unsigned int evtchn, irq;
struct request_queue *rq;
struct work_struct work;
struct blkfront_info *info = req->rq_disk->private_data;
unsigned long buffer_mfn;
struct blkif_request *ring_req;
- struct req_iterator iter;
- struct bio_vec *bvec;
unsigned long id;
unsigned int fsect, lsect;
- int ref;
+ int i, ref;
grant_ref_t gref_head;
+ struct scatterlist *sg;
if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
return 1;
if (blk_barrier_rq(req))
ring_req->operation = BLKIF_OP_WRITE_BARRIER;
- ring_req->nr_segments = 0;
- rq_for_each_segment(bvec, req, iter) {
- BUG_ON(ring_req->nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST);
- buffer_mfn = pfn_to_mfn(page_to_pfn(bvec->bv_page));
- fsect = bvec->bv_offset >> 9;
- lsect = fsect + (bvec->bv_len >> 9) - 1;
+ ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg);
+ BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
+
+ for_each_sg(info->sg, sg, ring_req->nr_segments, i) {
+ buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
+ fsect = sg->offset >> 9;
+ lsect = fsect + (sg->length >> 9) - 1;
/* install a grant reference. */
ref = gnttab_claim_grant_reference(&gref_head);
BUG_ON(ref == -ENOSPC);
buffer_mfn,
rq_data_dir(req) );
- info->shadow[id].frame[ring_req->nr_segments] =
- mfn_to_pfn(buffer_mfn);
-
- ring_req->seg[ring_req->nr_segments] =
+ info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
+ ring_req->seg[i] =
(struct blkif_request_segment) {
.gref = ref,
.first_sect = fsect,
.last_sect = lsect };
-
- ring_req->nr_segments++;
}
info->ring.req_prod_pvt++;
SHARED_RING_INIT(sring);
FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);
+ sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+
err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
if (err < 0) {
free_page((unsigned long)sring);
if (!ctx_pool) {
goto err;
}
- ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0);
+ ret = qmgr_request_queue(SEND_QID, NPE_QLEN_TOTAL, 0, 0,
+ "ixp_crypto:out", NULL);
if (ret)
goto err;
- ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0);
+ ret = qmgr_request_queue(RECV_QID, NPE_QLEN, 0, 0,
+ "ixp_crypto:in", NULL);
if (ret) {
qmgr_release_queue(SEND_QID);
goto err;
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michal Ludvig");
-MODULE_ALIAS("aes");
+MODULE_ALIAS("aes-all");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michal Ludvig");
-MODULE_ALIAS("sha1");
-MODULE_ALIAS("sha256");
+MODULE_ALIAS("sha1-all");
+MODULE_ALIAS("sha256-all");
MODULE_ALIAS("sha1-padlock");
MODULE_ALIAS("sha256-padlock");
static struct platform_driver iop_adma_driver = {
.probe = iop_adma_probe,
- .remove = iop_adma_remove,
+ .remove = __devexit_p(iop_adma_remove),
.driver = {
.owner = THIS_MODULE,
.name = "iop-adma",
static struct platform_driver mv_xor_driver = {
.probe = mv_xor_probe,
- .remove = mv_xor_remove,
+ .remove = __devexit_p(mv_xor_remove),
.driver = {
.owner = THIS_MODULE,
.name = MV_XOR_NAME,
dev->sigdata.lock = NULL;
master->lock.hw_lock = NULL; /* SHM removed */
master->lock.file_priv = NULL;
- wake_up_interruptible(&master->lock.lock_queue);
+ wake_up_interruptible_all(&master->lock.lock_queue);
}
break;
case _DRM_AGP:
kfree(modes);
kfree(enabled);
}
+
+/**
+ * drm_encoder_crtc_ok - can a given crtc drive a given encoder?
+ * @encoder: encoder to test
+ * @crtc: crtc to test
+ *
+ * Return false if @encoder can't be driven by @crtc, true otherwise.
+ */
+static bool drm_encoder_crtc_ok(struct drm_encoder *encoder,
+ struct drm_crtc *crtc)
+{
+ struct drm_device *dev;
+ struct drm_crtc *tmp;
+ int crtc_mask = 1;
+
+ WARN(!crtc, "checking null crtc?");
+
+ dev = crtc->dev;
+
+ list_for_each_entry(tmp, &dev->mode_config.crtc_list, head) {
+ if (tmp == crtc)
+ break;
+ crtc_mask <<= 1;
+ }
+
+ if (encoder->possible_crtcs & crtc_mask)
+ return true;
+ return false;
+}
+
+/*
+ * Check the CRTC we're going to map each output to vs. its current
+ * CRTC. If they don't match, we have to disable the output and the CRTC
+ * since the driver will have to re-route things.
+ */
+static void
+drm_crtc_prepare_encoders(struct drm_device *dev)
+{
+ struct drm_encoder_helper_funcs *encoder_funcs;
+ struct drm_encoder *encoder;
+
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+ encoder_funcs = encoder->helper_private;
+ /* Disable unused encoders */
+ if (encoder->crtc == NULL)
+ (*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF);
+ /* Disable encoders whose CRTC is about to change */
+ if (encoder_funcs->get_crtc &&
+ encoder->crtc != (*encoder_funcs->get_crtc)(encoder))
+ (*encoder_funcs->dpms)(encoder, DRM_MODE_DPMS_OFF);
+ }
+}
+
/**
* drm_crtc_set_mode - set a mode
* @crtc: CRTC to program
encoder_funcs->prepare(encoder);
}
+ drm_crtc_prepare_encoders(dev);
+
crtc_funcs->prepare(crtc);
/* Set up the DPLL and any encoders state that needs to adjust or depend
struct drm_device *dev;
struct drm_crtc **save_crtcs, *new_crtc;
struct drm_encoder **save_encoders, *new_encoder;
- struct drm_framebuffer *old_fb;
+ struct drm_framebuffer *old_fb = NULL;
bool save_enabled;
bool mode_changed = false;
bool fb_changed = false;
* and then just flip_or_move it */
if (set->crtc->fb != set->fb) {
/* If we have no fb then treat it as a full mode set */
- if (set->crtc->fb == NULL)
+ if (set->crtc->fb == NULL) {
+ DRM_DEBUG("crtc has no fb, full mode set\n");
mode_changed = true;
- else if ((set->fb->bits_per_pixel !=
+ } else if ((set->fb->bits_per_pixel !=
set->crtc->fb->bits_per_pixel) ||
set->fb->depth != set->crtc->fb->depth)
fb_changed = true;
fb_changed = true;
if (set->mode && !drm_mode_equal(set->mode, &set->crtc->mode)) {
- DRM_DEBUG("modes are different\n");
+ DRM_DEBUG("modes are different, full mode set\n");
drm_mode_debug_printmodeline(&set->crtc->mode);
drm_mode_debug_printmodeline(set->mode);
mode_changed = true;
}
if (new_encoder != connector->encoder) {
+ DRM_DEBUG("encoder changed, full mode switch\n");
mode_changed = true;
connector->encoder = new_encoder;
}
if (set->connectors[ro] == connector)
new_crtc = set->crtc;
}
+
+ /* Make sure the new CRTC will work with the encoder */
+ if (new_crtc &&
+ !drm_encoder_crtc_ok(connector->encoder, new_crtc)) {
+ ret = -EINVAL;
+ goto fail_set_mode;
+ }
if (new_crtc != connector->encoder->crtc) {
+ DRM_DEBUG("crtc changed, full mode switch\n");
mode_changed = true;
connector->encoder->crtc = new_crtc;
}
+ DRM_DEBUG("setting connector %d crtc to %p\n",
+ connector->base.id, new_crtc);
}
/* mode_set_base is not a required function */
fail_set_mode:
set->crtc->enabled = save_enabled;
+ set->crtc->fb = old_fb;
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (!connector->encoder)
DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version);
goto bad;
}
- if (edid->revision <= 0 || edid->revision > 3) {
+ if (edid->revision > 3) {
DRM_ERROR("EDID has minor version %d, which is not between 0-3\n", edid->revision);
goto bad;
}
mode->htotal = mode->hdisplay + ((pt->hblank_hi << 8) | pt->hblank_lo);
mode->vdisplay = (pt->vactive_hi << 8) | pt->vactive_lo;
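+ /* The vsync offset/pulse-width "lo" fields hold only 4 bits each,
+ so the stored high bits start at bit 4 (hence << 4, not << 8). */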
- mode->vsync_start = mode->vdisplay + ((pt->vsync_offset_hi << 8) |
+ mode->vsync_start = mode->vdisplay + ((pt->vsync_offset_hi << 4) |
pt->vsync_offset_lo);
mode->vsync_end = mode->vsync_start +
- ((pt->vsync_pulse_width_hi << 8) |
+ ((pt->vsync_pulse_width_hi << 4) |
pt->vsync_pulse_width_lo);
mode->vtotal = mode->vdisplay + ((pt->vblank_hi << 8) | pt->vblank_lo);
mutex_lock(&dev->struct_mutex);
if (file_priv->is_master) {
+ struct drm_master *master = file_priv->master;
struct drm_file *temp;
list_for_each_entry(temp, &dev->filelist, lhead) {
if ((temp->master == file_priv->master) &&
temp->authenticated = 0;
}
+ /**
+ * Since the master is disappearing, so is the
+ * possibility to lock.
+ */
+
+ if (master->lock.hw_lock) {
+ if (dev->sigdata.lock == master->lock.hw_lock)
+ dev->sigdata.lock = NULL;
+ master->lock.hw_lock = NULL;
+ master->lock.file_priv = NULL;
+ wake_up_interruptible_all(&master->lock.lock_queue);
+ }
+
if (file_priv->minor->master == file_priv->master) {
/* drop the reference held my the minor */
drm_master_put(&file_priv->minor->master);
*/
void drm_vblank_put(struct drm_device *dev, int crtc)
{
+ BUG_ON(atomic_read(&dev->vblank_refcount[crtc]) == 0);
+
/* Last user schedules interrupt disable */
if (atomic_dec_and_test(&dev->vblank_refcount[crtc]))
mod_timer(&dev->vblank_disable_timer, jiffies + 5*DRM_HZ);
* so that interrupts remain enabled in the interim.
*/
if (!dev->vblank_inmodeset[crtc]) {
- dev->vblank_inmodeset[crtc] = 1;
- drm_vblank_get(dev, crtc);
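+ /* bit 0: modeset in progress; bit 1: a vblank reference is held
+ and must be dropped in drm_vblank_post_modeset() */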
+ dev->vblank_inmodeset[crtc] = 0x1;
+ if (drm_vblank_get(dev, crtc) == 0)
+ dev->vblank_inmodeset[crtc] |= 0x2;
}
}
EXPORT_SYMBOL(drm_vblank_pre_modeset);
if (dev->vblank_inmodeset[crtc]) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
dev->vblank_disable_allowed = 1;
- dev->vblank_inmodeset[crtc] = 0;
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
- drm_vblank_put(dev, crtc);
+
+ if (dev->vblank_inmodeset[crtc] & 0x2)
+ drm_vblank_put(dev, crtc);
+
+ dev->vblank_inmodeset[crtc] = 0;
}
}
EXPORT_SYMBOL(drm_vblank_post_modeset);
__set_current_state(TASK_INTERRUPTIBLE);
if (!master->lock.hw_lock) {
/* Device has been unregistered */
+ send_sig(SIGTERM, current, 0);
ret = -EINTR;
break;
}
/* Contention */
schedule();
if (signal_pending(current)) {
- ret = -ERESTARTSYS;
+ ret = -EINTR;
break;
}
}
drm_ht_remove(&master->magiclist);
- if (master->lock.hw_lock) {
- if (dev->sigdata.lock == master->lock.hw_lock)
- dev->sigdata.lock = NULL;
- master->lock.hw_lock = NULL;
- master->lock.file_priv = NULL;
- wake_up_interruptible(&master->lock.lock_queue);
- }
-
drm_free(master, sizeof(*master), DRM_MEM_DRIVER);
}
dev_priv->hws_map.flags = 0;
dev_priv->hws_map.mtrr = 0;
- drm_core_ioremap(&dev_priv->hws_map, dev);
+ drm_core_ioremap_wc(&dev_priv->hws_map, dev);
if (dev_priv->hws_map.handle == NULL) {
i915_dma_cleanup(dev);
dev_priv->status_gfx_addr = 0;
dev_priv->mm.gtt_mapping =
io_mapping_create_wc(dev->agp->base,
dev->agp->agp_info.aper_size * 1024*1024);
+ if (dev_priv->mm.gtt_mapping == NULL) {
+ ret = -EIO;
+ goto out_rmmap;
+ }
+
/* Set up a WC MTRR for non-PAT systems. This is more common than
* one would think, because the kernel disables PAT on first
* generation Core chips because WC PAT gets overridden by a UC
if (!I915_NEED_GFX_HWS(dev)) {
ret = i915_init_phys_hws(dev);
if (ret != 0)
- goto out_rmmap;
+ goto out_iomapfree;
}
/* On the 945G/GM, the chipset reports the MSI capability on the
return 0;
+out_iomapfree:
+ io_mapping_free(dev_priv->mm.gtt_mapping);
out_rmmap:
iounmap(dev_priv->regs);
free_priv:
user_data = (char __user *) (uintptr_t) args->data_ptr;
obj_addr = obj_priv->phys_obj->handle->vaddr + args->offset;
- DRM_ERROR("obj_addr %p, %lld\n", obj_addr, args->size);
+ DRM_DEBUG("obj_addr %p, %lld\n", obj_addr, args->size);
ret = copy_from_user(obj_addr, user_data, args->size);
if (ret)
return -EFAULT;
drm_i915_irq_emit_t *emit = data;
int result;
- RING_LOCK_TEST_WITH_RETURN(dev, file_priv);
-
if (!dev_priv) {
DRM_ERROR("called with no initialization\n");
return -EINVAL;
}
+
+ RING_LOCK_TEST_WITH_RETURN(dev, file_priv);
+
mutex_lock(&dev->struct_mutex);
result = i915_emit_irq(dev);
mutex_unlock(&dev->struct_mutex);
panel_fixed_mode->clock = dvo_timing->clock * 10;
panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED;
+ /* Some VBTs have bogus h/vtotal values */
+ if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal)
+ panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1;
+ if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal)
+ panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end + 1;
+
drm_mode_set_name(panel_fixed_mode);
dev_priv->vbt_mode = panel_fixed_mode;
return false;
}
-#define INTELPllInvalid(s) do { DRM_DEBUG(s); return false; } while (0)
+#define INTELPllInvalid(s) do { /* DRM_DEBUG(s); */ return false; } while (0)
/**
* Returns whether the given set of divisors are valid for a given refclk with
* the given connectors.
return 0;
}
-static void __devexit
+static void
mv64xxx_i2c_unmap_regs(struct mv64xxx_i2c_data *drv_data)
{
if (drv_data->reg_base) {
static struct platform_driver mv64xxx_i2c_driver = {
.probe = mv64xxx_i2c_probe,
- .remove = mv64xxx_i2c_remove,
+ .remove = __devexit_p(mv64xxx_i2c_remove),
.driver = {
.owner = THIS_MODULE,
.name = MV64XXX_I2C_CTLR_NAME,
SMART parameters from disk drives.
To compile this driver as a module, choose M here: the
- module will be called ide.
+ module will be called ide-core.ko.
For further information, please read <file:Documentation/ide/ide.txt>.
* Check for broken FIFO support.
*/
if (dev->vendor == PCI_VENDOR_ID_AMD &&
- dev->vendor == PCI_DEVICE_ID_AMD_VIPER_7411)
+ dev->device == PCI_DEVICE_ID_AMD_VIPER_7411)
t &= 0x0f;
else
t |= 0xf0;
{
struct pci_dev *dev = to_pci_dev(drive->hwif->dev);
unsigned long flags;
- int timing_shift = (drive->dn & 2) ? 16 : 0 + (drive->dn & 1) ? 0 : 8;
+ int timing_shift = (drive->dn ^ 1) * 8;
u32 pio_timing_data;
u16 pio_mode_data;
{
struct pci_dev *dev = to_pci_dev(drive->hwif->dev);
unsigned long flags;
- int timing_shift = (drive->dn & 2) ? 16 : 0 + (drive->dn & 1) ? 0 : 8;
+ int timing_shift = (drive->dn ^ 1) * 8;
u32 tmp32;
u16 tmp16;
u16 udma_ctl = 0;
static DEFINE_MUTEX(idecd_ref_mutex);
-static void ide_cd_release(struct kref *);
+static void ide_cd_release(struct device *);
static struct cdrom_info *ide_cd_get(struct gendisk *disk)
{
if (ide_device_get(cd->drive))
cd = NULL;
else
- kref_get(&cd->kref);
+ get_device(&cd->dev);
}
mutex_unlock(&idecd_ref_mutex);
ide_drive_t *drive = cd->drive;
mutex_lock(&idecd_ref_mutex);
- kref_put(&cd->kref, ide_cd_release);
+ put_device(&cd->dev);
ide_device_put(drive);
mutex_unlock(&idecd_ref_mutex);
}
bio_sectors = max(bio_sectors(failed_command->bio), 4U);
sector &= ~(bio_sectors - 1);
+ /*
+ * The SCSI specification allows for the value
+ * returned by READ CAPACITY to be up to 75 2K
+ * sectors past the last readable block.
+ * Therefore, if we hit a medium error within the
+ * last 75 2K sectors, we decrease the saved size
+ * value.
+ */
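+ /* (get_capacity() counts 512-byte sectors, hence the 4 * 75 below) */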
if (sector < get_capacity(info->disk) &&
drive->probed_capacity - sector < 4 * 75)
set_capacity(info->disk, sector);
ide_debug_log(IDE_DBG_FUNC, "Call %s\n", __func__);
ide_proc_unregister_driver(drive, info->driver);
-
+ device_del(&info->dev);
del_gendisk(info->disk);
- ide_cd_put(info);
+ mutex_lock(&idecd_ref_mutex);
+ put_device(&info->dev);
+ mutex_unlock(&idecd_ref_mutex);
}
-static void ide_cd_release(struct kref *kref)
+static void ide_cd_release(struct device *dev)
{
- struct cdrom_info *info = to_ide_drv(kref, cdrom_info);
+ struct cdrom_info *info = to_ide_drv(dev, cdrom_info);
struct cdrom_device_info *devinfo = &info->devinfo;
ide_drive_t *drive = info->drive;
struct gendisk *g = info->disk;
ide_init_disk(g, drive);
- kref_init(&info->kref);
+ info->dev.parent = &drive->gendev;
+ info->dev.release = ide_cd_release;
+ dev_set_name(&info->dev, dev_name(&drive->gendev));
+
+ if (device_register(&info->dev))
+ goto out_free_disk;
info->drive = drive;
info->driver = &ide_cdrom_driver;
g->driverfs_dev = &drive->gendev;
g->flags = GENHD_FL_CD | GENHD_FL_REMOVABLE;
if (ide_cdrom_setup(drive)) {
- ide_cd_release(&info->kref);
+ put_device(&info->dev);
goto failed;
}
add_disk(g);
return 0;
+out_free_disk:
+ put_disk(g);
out_free_cd:
kfree(info);
failed:
ide_drive_t *drive;
struct ide_driver *driver;
struct gendisk *disk;
- struct kref kref;
+ struct device dev;
/* Buffer for table of contents. NULL if we haven't allocated
a TOC buffer for this device yet. */
static DEFINE_MUTEX(ide_disk_ref_mutex);
-static void ide_disk_release(struct kref *);
+static void ide_disk_release(struct device *);
static struct ide_disk_obj *ide_disk_get(struct gendisk *disk)
{
if (ide_device_get(idkp->drive))
idkp = NULL;
else
- kref_get(&idkp->kref);
+ get_device(&idkp->dev);
}
mutex_unlock(&ide_disk_ref_mutex);
return idkp;
ide_drive_t *drive = idkp->drive;
mutex_lock(&ide_disk_ref_mutex);
- kref_put(&idkp->kref, ide_disk_release);
+ put_device(&idkp->dev);
ide_device_put(drive);
mutex_unlock(&ide_disk_ref_mutex);
}
struct gendisk *g = idkp->disk;
ide_proc_unregister_driver(drive, idkp->driver);
-
+ device_del(&idkp->dev);
del_gendisk(g);
-
drive->disk_ops->flush(drive);
- ide_disk_put(idkp);
+ mutex_lock(&ide_disk_ref_mutex);
+ put_device(&idkp->dev);
+ mutex_unlock(&ide_disk_ref_mutex);
}
-static void ide_disk_release(struct kref *kref)
+static void ide_disk_release(struct device *dev)
{
- struct ide_disk_obj *idkp = to_ide_drv(kref, ide_disk_obj);
+ struct ide_disk_obj *idkp = to_ide_drv(dev, ide_disk_obj);
ide_drive_t *drive = idkp->drive;
struct gendisk *g = idkp->disk;
ide_init_disk(g, drive);
- kref_init(&idkp->kref);
+ idkp->dev.parent = &drive->gendev;
+ idkp->dev.release = ide_disk_release;
+ dev_set_name(&idkp->dev, dev_name(&drive->gendev));
+
+ if (device_register(&idkp->dev))
+ goto out_free_disk;
idkp->drive = drive;
idkp->driver = &ide_gd_driver;
add_disk(g);
return 0;
+out_free_disk:
+ put_disk(g);
out_free_idkp:
kfree(idkp);
failed:
ide_drive_t *drive;
struct ide_driver *driver;
struct gendisk *disk;
- struct kref kref;
+ struct device dev;
unsigned int openers; /* protected by BKL for now */
/* Last failed packet command */
ide_drive_t *drive;
struct ide_driver *driver;
struct gendisk *disk;
- struct kref kref;
+ struct device dev;
/*
* failed_pc points to the last failed packet command, or contains
static struct class *idetape_sysfs_class;
-static void ide_tape_release(struct kref *);
+static void ide_tape_release(struct device *);
static struct ide_tape_obj *ide_tape_get(struct gendisk *disk)
{
if (ide_device_get(tape->drive))
tape = NULL;
else
- kref_get(&tape->kref);
+ get_device(&tape->dev);
}
mutex_unlock(&idetape_ref_mutex);
return tape;
ide_drive_t *drive = tape->drive;
mutex_lock(&idetape_ref_mutex);
- kref_put(&tape->kref, ide_tape_release);
+ put_device(&tape->dev);
ide_device_put(drive);
mutex_unlock(&idetape_ref_mutex);
}
mutex_lock(&idetape_ref_mutex);
tape = idetape_devs[i];
if (tape)
- kref_get(&tape->kref);
+ get_device(&tape->dev);
mutex_unlock(&idetape_ref_mutex);
return tape;
}
idetape_tape_t *tape = drive->driver_data;
ide_proc_unregister_driver(drive, tape->driver);
-
+ device_del(&tape->dev);
ide_unregister_region(tape->disk);
- ide_tape_put(tape);
+ mutex_lock(&idetape_ref_mutex);
+ put_device(&tape->dev);
+ mutex_unlock(&idetape_ref_mutex);
}
-static void ide_tape_release(struct kref *kref)
+static void ide_tape_release(struct device *dev)
{
- struct ide_tape_obj *tape = to_ide_drv(kref, ide_tape_obj);
+ struct ide_tape_obj *tape = to_ide_drv(dev, ide_tape_obj);
ide_drive_t *drive = tape->drive;
struct gendisk *g = tape->disk;
ide_init_disk(g, drive);
- kref_init(&tape->kref);
+ tape->dev.parent = &drive->gendev;
+ tape->dev.release = ide_tape_release;
+ dev_set_name(&tape->dev, dev_name(&drive->gendev));
+
+ if (device_register(&tape->dev))
+ goto out_free_disk;
tape->drive = drive;
tape->driver = &idetape_driver;
return 0;
+out_free_disk:
+ put_disk(g);
out_free_tape:
kfree(tape);
failed:
int a, b, i, j = 1;
unsigned int *dev_param_mask = (unsigned int *)kp->arg;
+ /* controller . device (0 or 1) [ : 1 (set) | 0 (clear) ] */
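+ /* e.g. "0.1:1" sets the flag for device 1 on controller 0 */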
if (sscanf(s, "%d.%d:%d", &a, &b, &j) != 3 &&
sscanf(s, "%d.%d", &a, &b) != 2)
return -EINVAL;
if (j)
*dev_param_mask |= (1 << i);
else
- *dev_param_mask &= (1 << i);
+ *dev_param_mask &= ~(1 << i);
return 0;
}
{
int a, b, c = 0, h = 0, s = 0, i, j = 1;
+ /* controller . device (0 or 1) : Cylinders , Heads , Sectors */
+ /* controller . device (0 or 1) : 1 (use CHS) | 0 (ignore CHS) */
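+ /* e.g. "0.0:980,16,32" forces that CHS geometry for device 0 on controller 0 */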
if (sscanf(str, "%d.%d:%d,%d,%d", &a, &b, &c, &h, &s) != 5 &&
sscanf(str, "%d.%d:%d", &a, &b, &j) != 3)
return -EINVAL;
if (j)
ide_disks |= (1 << i);
else
- ide_disks &= (1 << i);
+ ide_disks &= ~(1 << i);
ide_disks_chs[i].cyl = c;
ide_disks_chs[i].head = h;
{
int i, j = 1;
+ /* controller (ignore) */
+ /* controller : 1 (ignore) | 0 (use) */
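+ /* e.g. "0:1" makes controller 0 ignore the cable detection result */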
if (sscanf(s, "%d:%d", &i, &j) != 2 && sscanf(s, "%d", &i) != 1)
return -EINVAL;
if (j)
ide_ignore_cable |= (1 << i);
else
- ide_ignore_cable &= (1 << i);
+ ide_ignore_cable &= ~(1 << i);
return 0;
}
* May be copied or modified under the terms of the GNU General Public License
* Based in part on the ITE vendor provided SCSI driver.
*
- * Documentation available from
- * http://www.ite.com.tw/pc/IT8212F_V04.pdf
- * Some other documents are NDA.
+ * Documentation:
+ * Datasheet is freely available, some other documents under NDA.
*
* The ITE8212 isn't exactly a standard IDE controller. It has two
* modes. In pass-through mode it is an IDE controller. In its smart
unregister_chrdev_region(IEEE1394_CORE_DEV, 256);
}
-module_init(ieee1394_init);
+fs_initcall(ieee1394_init);
module_exit(ieee1394_cleanup);
/* Exported symbols */
*/
static void atkbd_dell_laptop_keymap_fixup(struct atkbd *atkbd)
{
- const unsigned int forced_release_keys[] = {
+ static const unsigned int forced_release_keys[] = {
0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8f, 0x93,
};
int i;
*/
static void atkbd_hp_keymap_fixup(struct atkbd *atkbd)
{
- const unsigned int forced_release_keys[] = {
+ static const unsigned int forced_release_keys[] = {
0x94,
};
int i;
goto out;
}
- if (!pdata->debounce_time || !pdata->debounce_time > MAX_MULT ||
- !pdata->coldrive_time || !pdata->coldrive_time > MAX_MULT) {
+ if (!pdata->debounce_time || pdata->debounce_time > MAX_MULT ||
+ !pdata->coldrive_time || pdata->coldrive_time > MAX_MULT) {
printk(KERN_ERR DRV_NAME
": Invalid Debounce/Columdrive Time from pdata\n");
bfin_write_KPAD_MSEL(0xFF0); /* Default MSEL */
#define corgikbd_resume NULL
#endif
-static int __init corgikbd_probe(struct platform_device *pdev)
+static int __devinit corgikbd_probe(struct platform_device *pdev)
{
struct corgikbd *corgikbd;
struct input_dev *input_dev;
return err;
}
-static int corgikbd_remove(struct platform_device *pdev)
+static int __devexit corgikbd_remove(struct platform_device *pdev)
{
int i;
struct corgikbd *corgikbd = platform_get_drvdata(pdev);
static struct platform_driver corgikbd_driver = {
.probe = corgikbd_probe,
- .remove = corgikbd_remove,
+ .remove = __devexit_p(corgikbd_remove),
.suspend = corgikbd_suspend,
.resume = corgikbd_resume,
.driver = {
},
};
-static int __devinit corgikbd_init(void)
+static int __init corgikbd_init(void)
{
return platform_driver_register(&corgikbd_driver);
}
#define omap_kp_resume NULL
#endif
-static int __init omap_kp_probe(struct platform_device *pdev)
+static int __devinit omap_kp_probe(struct platform_device *pdev)
{
struct omap_kp *omap_kp;
struct input_dev *input_dev;
return -EINVAL;
}
-static int omap_kp_remove(struct platform_device *pdev)
+static int __devexit omap_kp_remove(struct platform_device *pdev)
{
struct omap_kp *omap_kp = platform_get_drvdata(pdev);
static struct platform_driver omap_kp_driver = {
.probe = omap_kp_probe,
- .remove = omap_kp_remove,
+ .remove = __devexit_p(omap_kp_remove),
.suspend = omap_kp_suspend,
.resume = omap_kp_resume,
.driver = {
},
};
-static int __devinit omap_kp_init(void)
+static int __init omap_kp_init(void)
{
printk(KERN_INFO "OMAP Keypad Driver\n");
return platform_driver_register(&omap_kp_driver);
#define spitzkbd_resume NULL
#endif
-static int __init spitzkbd_probe(struct platform_device *dev)
+static int __devinit spitzkbd_probe(struct platform_device *dev)
{
struct spitzkbd *spitzkbd;
struct input_dev *input_dev;
return err;
}
-static int spitzkbd_remove(struct platform_device *dev)
+static int __devexit spitzkbd_remove(struct platform_device *dev)
{
int i;
struct spitzkbd *spitzkbd = platform_get_drvdata(dev);
static struct platform_driver spitzkbd_driver = {
.probe = spitzkbd_probe,
- .remove = spitzkbd_remove,
+ .remove = __devexit_p(spitzkbd_remove),
.suspend = spitzkbd_suspend,
.resume = spitzkbd_resume,
.driver = {
},
};
-static int __devinit spitzkbd_init(void)
+static int __init spitzkbd_init(void)
{
return platform_driver_register(&spitzkbd_driver);
}
config MOUSE_PS2_LIFEBOOK
bool "Fujitsu Lifebook PS/2 mouse protocol extension" if EMBEDDED
default y
- depends on MOUSE_PS2
+ depends on MOUSE_PS2 && X86
help
Say Y here if you have a Fujitsu B-series Lifebook PS/2
TouchScreen connected to your system.
ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO)) {
- pr_err("elantech.c: sending Elantech magic knock failed.\n");
+ pr_debug("elantech.c: sending Elantech magic knock failed.\n");
return -1;
}
* set of magic numbers
*/
if (param[0] != 0x3c || param[1] != 0x03 || param[2] != 0xc8) {
- pr_info("elantech.c: unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n",
- param[0], param[1], param[2]);
+ pr_debug("elantech.c: "
+ "unexpected magic knock result 0x%02x, 0x%02x, 0x%02x.\n",
+ param[0], param[1], param[2]);
+ return -1;
+ }
+
+ /*
+ * Query touchpad's firmware version and see if it reports known
+ * value to avoid mis-detection. Logitech mice are known to respond
+ * to Elantech magic knock and there might be more.
+ */
+ if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) {
+ pr_debug("elantech.c: failed to query firmware version.\n");
+ return -1;
+ }
+
+ pr_debug("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n",
+ param[0], param[1], param[2]);
+
+ if (param[0] == 0 || param[1] != 0) {
+ pr_debug("elantech.c: Probably not a real Elantech touchpad. Aborting.\n");
return -1;
}
int i, error;
unsigned char param[3];
- etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL);
- psmouse->private = etd;
+ psmouse->private = etd = kzalloc(sizeof(struct elantech_data), GFP_KERNEL);
if (!etd)
return -1;
etd->parity[i] = etd->parity[i & (i - 1)] ^ 1;
/*
- * Find out what version hardware this is
+ * Do the version query again so we can store the result
*/
if (synaptics_send_cmd(psmouse, ETP_FW_VERSION_QUERY, param)) {
pr_err("elantech.c: failed to query firmware version.\n");
goto init_fail;
}
- pr_info("elantech.c: Elantech version query result 0x%02x, 0x%02x, 0x%02x.\n",
- param[0], param[1], param[2]);
etd->fw_version_maj = param[0];
etd->fw_version_min = param[2];
__raw_writel(v, trkball->mmio_base + TBCR);
- while (i--) {
+ while (--i) {
if (__raw_readl(trkball->mmio_base + TBCR) == v)
break;
msleep(1);
static int synaptics_query_hardware(struct psmouse *psmouse)
{
- int retries = 0;
-
- while ((retries++ < 3) && psmouse_reset(psmouse))
- /* empty */;
-
if (synaptics_identify(psmouse))
return -1;
if (synaptics_model_id(psmouse))
struct synaptics_data *priv = psmouse->private;
struct synaptics_data old_priv = *priv;
+ psmouse_reset(psmouse);
+
if (synaptics_detect(psmouse, 0))
return -1;
if (!priv)
return -1;
+ psmouse_reset(psmouse);
+
if (synaptics_query_hardware(psmouse)) {
printk(KERN_ERR "Unable to query Synaptics hardware.\n");
goto init_fail;
struct amba_kmi_port *kmi = io->port_data;
unsigned int timeleft = 10000; /* timeout in 100ms */
- while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && timeleft--)
+ while ((readb(KMISTAT) & KMISTAT_TXEMPTY) == 0 && --timeleft)
udelay(10);
if (timeleft)
io->write = amba_kmi_write;
io->open = amba_kmi_open;
io->close = amba_kmi_close;
- strlcpy(io->name, dev->dev.bus_id, sizeof(io->name));
- strlcpy(io->phys, dev->dev.bus_id, sizeof(io->phys));
+ strlcpy(io->name, dev_name(&dev->dev), sizeof(io->name));
+ strlcpy(io->phys, dev_name(&dev->dev), sizeof(io->phys));
io->port_data = kmi;
io->dev.parent = &dev->dev;
snprintf(serio->name, sizeof(serio->name), "GSC PS/2 %s",
(ps2port->id == GSC_ID_KEYBOARD) ? "keyboard" : "mouse");
- strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys));
+ strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys));
serio->id.type = SERIO_8042;
serio->write = gscps2_write;
serio->open = gscps2_open;
serio->write = ps2_write;
serio->open = ps2_open;
serio->close = ps2_close;
- strlcpy(serio->name, dev->dev.bus_id, sizeof(serio->name));
- strlcpy(serio->phys, dev->dev.bus_id, sizeof(serio->phys));
+ strlcpy(serio->name, dev_name(&dev->dev), sizeof(serio->name));
+ strlcpy(serio->phys, dev_name(&dev->dev), sizeof(serio->phys));
serio->port_data = ps2if;
serio->dev.parent = &dev->dev;
ps2if->io = serio;
ts_dev->bufferedmeasure = 0;
snprintf(ts_dev->phys, sizeof(ts_dev->phys),
- "%s/input0", pdev->dev.bus_id);
+ "%s/input0", dev_name(&pdev->dev));
input_dev->name = "atmel touch screen controller";
input_dev->phys = ts_dev->phys;
#define corgits_resume NULL
#endif
-static int __init corgits_probe(struct platform_device *pdev)
+static int __devinit corgits_probe(struct platform_device *pdev)
{
struct corgi_ts *corgi_ts;
struct input_dev *input_dev;
return err;
}
-static int corgits_remove(struct platform_device *pdev)
+static int __devexit corgits_remove(struct platform_device *pdev)
{
struct corgi_ts *corgi_ts = platform_get_drvdata(pdev);
corgi_ts->machinfo->put_hsync();
input_unregister_device(corgi_ts->input);
kfree(corgi_ts);
+
return 0;
}
static struct platform_driver corgits_driver = {
.probe = corgits_probe,
- .remove = corgits_remove,
+ .remove = __devexit_p(corgits_remove),
.suspend = corgits_suspend,
.resume = corgits_resume,
.driver = {
},
};
-static int __devinit corgits_init(void)
+static int __init corgits_init(void)
{
return platform_driver_register(&corgits_driver);
}
pdata->init_platform_hw();
- snprintf(ts->phys, sizeof(ts->phys), "%s/input0", client->dev.bus_id);
+ snprintf(ts->phys, sizeof(ts->phys),
+ "%s/input0", dev_name(&client->dev));
input_dev->name = "TSC2007 Touchscreen";
input_dev->phys = ts->phys;
module_param(swap_xy, bool, 0644);
MODULE_PARM_DESC(swap_xy, "If set X and Y axes are swapped.");
+static int hwcalib_xy;
+module_param(hwcalib_xy, bool, 0644);
+MODULE_PARM_DESC(hwcalib_xy, "If set hw-calibrated X/Y are used if available");
+
/* device specific data/functions */
struct usbtouch_usb;
struct usbtouch_device_info {
#define USB_DEVICE_HID_CLASS(vend, prod) \
.match_flags = USB_DEVICE_ID_MATCH_INT_CLASS \
+ | USB_DEVICE_ID_MATCH_INT_PROTOCOL \
| USB_DEVICE_ID_MATCH_DEVICE, \
.idVendor = (vend), \
.idProduct = (prod), \
static int mtouch_read_data(struct usbtouch_usb *dev, unsigned char *pkt)
{
- dev->x = (pkt[8] << 8) | pkt[7];
- dev->y = (pkt[10] << 8) | pkt[9];
+ if (hwcalib_xy) {
+ dev->x = (pkt[4] << 8) | pkt[3];
+ dev->y = 0xffff - ((pkt[6] << 8) | pkt[5]);
+ } else {
+ dev->x = (pkt[8] << 8) | pkt[7];
+ dev->y = (pkt[10] << 8) | pkt[9];
+ }
dev->touch = (pkt[2] & 0x40) ? 1 : 0;
return 1;
return ret;
}
+ /* Default min/max xy are the raw values, override if using hw-calib */
+ if (hwcalib_xy) {
+ input_set_abs_params(usbtouch->input, ABS_X, 0, 0xffff, 0, 0);
+ input_set_abs_params(usbtouch->input, ABS_Y, 0, 0xffff, 0, 0);
+ }
+
return 0;
}
#endif
update_head_pos(mirror, r1_bio);
if (atomic_dec_and_test(&r1_bio->remaining)) {
- md_done_sync(mddev, r1_bio->sectors, uptodate);
+ sector_t s = r1_bio->sectors;
put_buf(r1_bio);
+ md_done_sync(mddev, s, uptodate);
}
}
/* for reconstruct, we always reschedule after a read.
* for resync, only after all reads
*/
+ rdev_dec_pending(conf->mirrors[d].rdev, conf->mddev);
if (test_bit(R10BIO_IsRecover, &r10_bio->state) ||
atomic_dec_and_test(&r10_bio->remaining)) {
/* we have read all the blocks,
*/
reschedule_retry(r10_bio);
}
- rdev_dec_pending(conf->mirrors[d].rdev, conf->mddev);
}
static void end_sync_write(struct bio *bio, int error)
update_head_pos(i, r10_bio);
+ rdev_dec_pending(conf->mirrors[d].rdev, mddev);
while (atomic_dec_and_test(&r10_bio->remaining)) {
if (r10_bio->master_bio == NULL) {
/* the primary of several recovery bios */
- md_done_sync(mddev, r10_bio->sectors, 1);
+ sector_t s = r10_bio->sectors;
put_buf(r10_bio);
+ md_done_sync(mddev, s, 1);
break;
} else {
r10bio_t *r10_bio2 = (r10bio_t *)r10_bio->master_bio;
r10_bio = r10_bio2;
}
}
- rdev_dec_pending(conf->mirrors[d].rdev, mddev);
}
/*
if (!go_faster && conf->nr_waiting)
msleep_interruptible(1000);
- bitmap_cond_end_sync(mddev->bitmap, sector_nr);
-
/* Again, very different code for resync and recovery.
* Both must result in an r10bio with a list of bios that
* have bi_end_io, bi_sector, bi_bdev set,
/* resync. Schedule a read for every block at this virt offset */
int count = 0;
+ bitmap_cond_end_sync(mddev->bitmap, sector_nr);
+
if (!bitmap_start_sync(mddev->bitmap, sector_nr,
&sync_blocks, mddev->degraded) &&
!conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
/* There is nowhere to write, so all non-sync
* drives must be failed, so try the next chunk...
*/
- {
- sector_t sec = max_sector - sector_nr;
- sectors_skipped += sec;
+ if (sector_nr + max_sync < max_sector)
+ max_sector = sector_nr + max_sync;
+
+ sectors_skipped += (max_sector - sector_nr);
chunks_skipped ++;
sector_nr = max_sector;
goto skipped;
- }
}
static int run(mddev_t *mddev)
return 0;
}
+EXPORT_SYMBOL(flexcop_pid_feed_control);
void flexcop_hw_filter_init(struct flexcop_device *fc)
{
module_param(enable_pid_filtering, int, 0444);
MODULE_PARM_DESC(enable_pid_filtering, "enable hardware pid filtering: supported values: 0 (fullts), 1");
-static int irq_chk_intv;
+static int irq_chk_intv = 100;
module_param(irq_chk_intv, int, 0644);
-MODULE_PARM_DESC(irq_chk_intv, "set the interval for IRQ watchdog (currently just debugging).");
+MODULE_PARM_DESC(irq_chk_intv, "set the interval for IRQ streaming watchdog.");
#ifdef CONFIG_DVB_B2C2_FLEXCOP_DEBUG
#define dprintk(level,args...) \
static int debug;
module_param(debug, int, 0644);
-MODULE_PARM_DESC(debug, "set debug level (1=info,2=regs,4=TS,8=irqdma (|-able))." DEBSTATUS);
+MODULE_PARM_DESC(debug,
+ "set debug level (1=info,2=regs,4=TS,8=irqdma,16=check (|-able))."
+ DEBSTATUS);
#define DRIVER_VERSION "0.1"
#define DRIVER_NAME "Technisat/B2C2 FlexCop II/IIb/III Digital TV PCI Driver"
int active_dma1_addr; /* 0 = addr0 of dma1; 1 = addr1 of dma1 */
u32 last_dma1_cur_pos; /* position of the pointer last time the timer/packet irq occurred */
int count;
+ int count_prev;
+ int stream_problem;
spinlock_t irq_lock;
container_of(work, struct flexcop_pci, irq_check_work.work);
struct flexcop_device *fc = fc_pci->fc_dev;
- flexcop_ibi_value v = fc->read_ibi_reg(fc,sram_dest_reg_714);
-
- flexcop_dump_reg(fc_pci->fc_dev,dma1_000,4);
-
- if (v.sram_dest_reg_714.net_ovflow_error)
- deb_chk("sram net_ovflow_error\n");
- if (v.sram_dest_reg_714.media_ovflow_error)
- deb_chk("sram media_ovflow_error\n");
- if (v.sram_dest_reg_714.cai_ovflow_error)
- deb_chk("sram cai_ovflow_error\n");
- if (v.sram_dest_reg_714.cai_ovflow_error)
- deb_chk("sram cai_ovflow_error\n");
+ if (fc->feedcount) {
+
+ if (fc_pci->count == fc_pci->count_prev) {
+ deb_chk("no IRQ since the last check\n");
+ if (fc_pci->stream_problem++ == 3) {
+ struct dvb_demux_feed *feed;
+
+ spin_lock_irq(&fc->demux.lock);
+ list_for_each_entry(feed, &fc->demux.feed_list,
+ list_head) {
+ flexcop_pid_feed_control(fc, feed, 0);
+ }
+
+ list_for_each_entry(feed, &fc->demux.feed_list,
+ list_head) {
+ flexcop_pid_feed_control(fc, feed, 1);
+ }
+ spin_unlock_irq(&fc->demux.lock);
+
+ fc_pci->stream_problem = 0;
+ }
+ } else {
+ fc_pci->stream_problem = 0;
+ fc_pci->count_prev = fc_pci->count;
+ }
+ }
schedule_delayed_work(&fc_pci->irq_check_work,
msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv));
flexcop_dma_control_timer_irq(fc,FC_DMA_1,1);
deb_irq("IRQ enabled\n");
+ fc_pci->count_prev = fc_pci->count;
+
// fc_pci->active_dma1_addr = 0;
// flexcop_dma_control_size_irq(fc,FC_DMA_1,1);
- if (irq_chk_intv > 0)
- schedule_delayed_work(&fc_pci->irq_check_work,
- msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv));
} else {
- if (irq_chk_intv > 0)
- cancel_delayed_work(&fc_pci->irq_check_work);
-
flexcop_dma_control_timer_irq(fc,FC_DMA_1,0);
deb_irq("IRQ disabled\n");
IRQF_SHARED, DRIVER_NAME, fc_pci)) != 0)
goto err_pci_iounmap;
-
-
fc_pci->init_state |= FC_PCI_INIT;
return ret;
INIT_DELAYED_WORK(&fc_pci->irq_check_work, flexcop_pci_irq_check_work);
+ if (irq_chk_intv > 0)
+ schedule_delayed_work(&fc_pci->irq_check_work,
+ msecs_to_jiffies(irq_chk_intv < 100 ? 100 : irq_chk_intv));
+
return ret;
err_fc_exit:
{
struct flexcop_pci *fc_pci = pci_get_drvdata(pdev);
+ if (irq_chk_intv > 0)
+ cancel_delayed_work(&fc_pci->irq_check_work);
+
flexcop_pci_dma_exit(fc_pci);
flexcop_device_exit(fc_pci->fc_dev);
flexcop_pci_exit(fc_pci);
v210.sw_reset_210.Block_reset_enable = 0xb2;
fc->write_ibi_reg(fc,sw_reset_210,v210);
- msleep(1);
-
+ udelay(1000);
fc->write_ibi_reg(fc,ctrl_208,v208_save);
}
pcm->info_flags = 0;
pcm->private_data = dev;
strcpy(pcm->name, "Empia 28xx Capture");
+
+ snd_card_set_dev(card, &dev->udev->dev);
strcpy(card->driver, "Empia Em28xx Audio");
strcpy(card->shortname, "Em28xx Audio");
strcpy(card->longname, "Empia Em28xx Audio");
{
struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
struct pxa_camera_dev *pcdev = ici->priv;
- const struct soc_camera_data_format *host_fmt, *cam_fmt = NULL;
- const struct soc_camera_format_xlate *xlate;
+ const struct soc_camera_data_format *cam_fmt = NULL;
+ const struct soc_camera_format_xlate *xlate = NULL;
struct soc_camera_sense sense = {
.master_clock = pcdev->mclk,
.pixel_clock_max = pcdev->ciclk / 4,
};
- int ret, buswidth;
+ int ret;
- xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
- if (!xlate) {
- dev_warn(&ici->dev, "Format %x not found\n", pixfmt);
- return -EINVAL;
- }
+ if (pixfmt) {
+ xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
+ if (!xlate) {
+ dev_warn(&ici->dev, "Format %x not found\n", pixfmt);
+ return -EINVAL;
+ }
- buswidth = xlate->buswidth;
- host_fmt = xlate->host_fmt;
- cam_fmt = xlate->cam_fmt;
+ cam_fmt = xlate->cam_fmt;
+ }
/* If PCLK is used to latch data from the sensor, check sense */
if (pcdev->platform_flags & PXA_CAMERA_PCLK_EN)
}
if (pixfmt && !ret) {
- icd->buswidth = buswidth;
- icd->current_fmt = host_fmt;
+ icd->buswidth = xlate->buswidth;
+ icd->current_fmt = xlate->host_fmt;
}
return ret;
const struct soc_camera_format_xlate *xlate;
int ret;
+ if (!pixfmt)
+ return icd->ops->set_fmt(icd, pixfmt, rect);
+
xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
if (!xlate) {
dev_warn(&ici->dev, "Format %x not found\n", pixfmt);
return -EINVAL;
}
- switch (pixfmt) {
- case 0: /* Only geometry change */
- ret = icd->ops->set_fmt(icd, pixfmt, rect);
- break;
- default:
- ret = icd->ops->set_fmt(icd, xlate->cam_fmt->fourcc, rect);
- }
+ ret = icd->ops->set_fmt(icd, xlate->cam_fmt->fourcc, rect);
- if (pixfmt && !ret) {
+ if (!ret) {
icd->buswidth = xlate->buswidth;
icd->current_fmt = xlate->host_fmt;
pcdev->camera_fmt = xlate->cam_fmt;
usb_to_input_id(udev, &input->id);
input->dev.parent = &dev->intf->dev;
- set_bit(EV_KEY, input->evbit);
- set_bit(BTN_0, input->keybit);
+ __set_bit(EV_KEY, input->evbit);
+ __set_bit(KEY_CAMERA, input->keybit);
if ((ret = input_register_device(input)) < 0)
goto error;
static void uvc_input_report_key(struct uvc_device *dev, unsigned int code,
int value)
{
- if (dev->input)
+ if (dev->input) {
input_report_key(dev->input, code, value);
+ input_sync(dev->input);
+ }
}
#else
return;
uvc_trace(UVC_TRACE_STATUS, "Button (intf %u) %s len %d\n",
data[1], data[3] ? "pressed" : "released", len);
- uvc_input_report_key(dev, BTN_0, data[3]);
+ uvc_input_report_key(dev, KEY_CAMERA, data[3]);
} else {
uvc_trace(UVC_TRACE_STATUS, "Stream %u error event %02x %02x "
"len %d.\n", data[1], data[2], data[3], len);
controllers (default=0)");
static int mpt_msi_enable_sas;
-module_param(mpt_msi_enable_sas, int, 1);
+module_param(mpt_msi_enable_sas, int, 0);
MODULE_PARM_DESC(mpt_msi_enable_sas, " Enable MSI Support for SAS \
- controllers (default=1)");
+ controllers (default=0)");
static int mpt_channel_mapping;
static struct pci_device_id ilo_devices[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_COMPAQ, 0xB204) },
+ { PCI_DEVICE(PCI_VENDOR_ID_HP, 0x3307) },
{ }
};
MODULE_DEVICE_TABLE(pci, ilo_devices);
class_destroy(ilo_class);
}
-MODULE_VERSION("0.06");
+MODULE_VERSION("1.0");
MODULE_ALIAS(ILO_NAME);
MODULE_DESCRIPTION(ILO_NAME);
MODULE_AUTHOR("David Altobelli <david.altobelli@hp.com>");
static const struct sdhci_pci_fixes sdhci_cafe = {
.quirks = SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER |
+ SDHCI_QUIRK_NO_BUSY_IRQ |
SDHCI_QUIRK_BROKEN_TIMEOUT_VAL,
};
if (host->cmd->data)
DBG("Cannot wait for busy signal when also "
"doing a data transfer");
- else
+ else if (!(host->quirks & SDHCI_QUIRK_NO_BUSY_IRQ))
return;
+
+ /* The controller does not support the end-of-busy IRQ,
+ * fall through and take the SDHCI_INT_RESPONSE */
}
if (intmask & SDHCI_INT_RESPONSE)
#define SDHCI_QUIRK_BROKEN_TIMEOUT_VAL (1<<12)
/* Controller has an issue with buffer bits for small transfers */
#define SDHCI_QUIRK_BROKEN_SMALL_PIO (1<<13)
+/* Controller does not provide transfer-complete interrupt when not busy */
+#define SDHCI_QUIRK_NO_BUSY_IRQ (1<<14)
int irq; /* Device IRQ */
void __iomem * ioaddr; /* Mapped address */
static int maprom_write (struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
static void maprom_nop (struct mtd_info *);
static struct mtd_info *map_rom_probe(struct map_info *map);
+static int maprom_erase (struct mtd_info *mtd, struct erase_info *info);
static struct mtd_chip_driver maprom_chipdrv = {
.probe = map_rom_probe,
mtd->read = maprom_read;
mtd->write = maprom_write;
mtd->sync = maprom_nop;
+ mtd->erase = maprom_erase;
mtd->flags = MTD_CAP_ROM;
mtd->erasesize = map->size;
mtd->writesize = 1;
return -EIO;
}
+static int maprom_erase (struct mtd_info *mtd, struct erase_info *info)
+{
+ /* We do our best 8) */
+ return -EROFS;
+}
+
static int __init map_rom_init(void)
{
register_mtd_chip_driver(&maprom_chipdrv);
if (*(szlength) != '+') {
devlength = simple_strtoul(szlength, &buffer, 0);
- devlength = handle_unit(devlength, buffer) - devstart;
+ devlength = handle_unit(devlength, buffer);
+ if (devlength < devstart)
+ goto err_out;
+
+ devlength -= devstart;
} else {
devlength = simple_strtoul(szlength + 1, &buffer, 0);
devlength = handle_unit(devlength, buffer);
}
T("slram: devname=%s, devstart=0x%lx, devlength=0x%lx\n",
devname, devstart, devlength);
- if ((devstart < 0) || (devlength < 0) || (devlength % SLRAM_BLK_SZ != 0)) {
- E("slram: Illegal start / length parameter.\n");
- return(-EINVAL);
- }
+ if (devlength % SLRAM_BLK_SZ != 0)
+ goto err_out;
if ((devstart = register_device(devname, devstart, devlength))){
unregister_devices();
return((int)devstart);
}
return(0);
+
+err_out:
+ E("slram: Illegal length parameter.\n");
+ return(-EINVAL);
}
#ifndef MODULE
DDR memories, intended for battery-operated systems.
config MTD_QINFO_PROBE
+ depends on MTD_LPDDR
tristate "Detect flash chips by QINFO probe"
help
Device Information for LPDDR chips is offered through the Overlay
config MTD_BFIN_ASYNC
tristate "Blackfin BF533-STAMP Flash Chip Support"
- depends on BFIN533_STAMP && MTD_CFI
+ depends on BFIN533_STAMP && MTD_CFI && MTD_COMPLEX_MAPPINGS
select MTD_PARTITIONS
default y
help
if (gpio_request(state->enet_flash_pin, DRIVER_NAME)) {
pr_devinit(KERN_ERR DRIVER_NAME ": Failed to request gpio %d\n", state->enet_flash_pin);
+ kfree(state);
return -EBUSY;
}
gpio_direction_output(state->enet_flash_pin, 1);
pr_devinit(KERN_NOTICE DRIVER_NAME ": probing %d-bit flash bus\n", state->map.bankwidth * 8);
state->mtd = do_map_probe(memory->name, &state->map);
- if (!state->mtd)
+ if (!state->mtd) {
+ gpio_free(state->enet_flash_pin);
+ kfree(state);
return -ENXIO;
+ }
#ifdef CONFIG_MTD_PARTITIONS
ret = parse_mtd_partitions(state->mtd, part_probe_types, &pdata->parts, 0);
{ 0, }
};
+#if 0
MODULE_DEVICE_TABLE(pci, ck804xrom_pci_tbl);
-#if 0
static struct pci_driver ck804xrom_driver = {
.name = MOD_NAME,
.id_table = ck804xrom_pci_tbl,
struct map_info map[MAX_RESOURCES];
#ifdef CONFIG_MTD_PARTITIONS
int nr_parts;
+ struct mtd_partition *parts;
#endif
};
physmap_data = dev->dev.platform_data;
-#ifdef CONFIG_MTD_CONCAT
- if (info->cmtd != info->mtd[0]) {
+#ifdef CONFIG_MTD_PARTITIONS
+ if (info->nr_parts) {
+ del_mtd_partitions(info->cmtd);
+ kfree(info->parts);
+ } else if (physmap_data->nr_parts)
+ del_mtd_partitions(info->cmtd);
+ else
del_mtd_device(info->cmtd);
+#else
+ del_mtd_device(info->cmtd);
+#endif
+
+#ifdef CONFIG_MTD_CONCAT
+ if (info->cmtd != info->mtd[0])
mtd_concat_destroy(info->cmtd);
- }
#endif
for (i = 0; i < MAX_RESOURCES; i++) {
- if (info->mtd[i] != NULL) {
-#ifdef CONFIG_MTD_PARTITIONS
- if (info->nr_parts || physmap_data->nr_parts)
- del_mtd_partitions(info->mtd[i]);
- else
- del_mtd_device(info->mtd[i]);
-#else
- del_mtd_device(info->mtd[i]);
-#endif
+ if (info->mtd[i] != NULL)
map_destroy(info->mtd[i]);
- }
}
return 0;
}
int err = 0;
int i;
int devices_found = 0;
-#ifdef CONFIG_MTD_PARTITIONS
- struct mtd_partition *parts;
-#endif
physmap_data = dev->dev.platform_data;
if (physmap_data == NULL)
goto err_out;
#ifdef CONFIG_MTD_PARTITIONS
- err = parse_mtd_partitions(info->cmtd, part_probe_types, &parts, 0);
+ err = parse_mtd_partitions(info->cmtd, part_probe_types,
+ &info->parts, 0);
if (err > 0) {
- add_mtd_partitions(info->cmtd, parts, err);
- kfree(parts);
+ add_mtd_partitions(info->cmtd, info->parts, err);
+ info->nr_parts = err;
return 0;
}
static struct platform_driver orion_nand_driver = {
.probe = orion_nand_probe,
- .remove = orion_nand_remove,
+ .remove = __devexit_p(orion_nand_remove),
.driver = {
.name = "orion_nand",
.owner = THIS_MODULE,
#
obj-$(CONFIG_ARM_AM79C961A) += am79c961a.o
-obj-$(CONFIG_ARM_ETHERH) += etherh.o ../8390.o
+obj-$(CONFIG_ARM_ETHERH) += etherh.o
obj-$(CONFIG_ARM_ETHER3) += ether3.o
obj-$(CONFIG_ARM_ETHER1) += ether1.o
obj-$(CONFIG_ARM_AT91_ETHER) += at91_ether.o
.ndo_open = etherh_open,
.ndo_stop = etherh_close,
.ndo_set_config = etherh_set_config,
- .ndo_start_xmit = ei_start_xmit,
- .ndo_tx_timeout = ei_tx_timeout,
- .ndo_get_stats = ei_get_stats,
- .ndo_set_multicast_list = ei_set_multicast_list,
+ .ndo_start_xmit = __ei_start_xmit,
+ .ndo_tx_timeout = __ei_tx_timeout,
+ .ndo_get_stats = __ei_get_stats,
+ .ndo_set_multicast_list = __ei_set_multicast_list,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = eth_mac_addr,
.ndo_change_mtu = eth_change_mtu,
#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = ei_poll,
+ .ndo_poll_controller = __ei_poll,
#endif
};
static void b44_chip_reset(struct b44 *bp, int reset_kind)
{
struct ssb_device *sdev = bp->sdev;
+ bool was_enabled;
- if (ssb_device_is_enabled(bp->sdev)) {
+ was_enabled = ssb_device_is_enabled(bp->sdev);
+
+ ssb_device_enable(bp->sdev, 0);
+ ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev);
+
+ if (was_enabled) {
bw32(bp, B44_RCV_LAZY, 0);
bw32(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE);
b44_wait_bit(bp, B44_ENET_CTRL, ENET_CTRL_DISABLE, 200, 1);
}
bw32(bp, B44_DMARX_CTRL, 0);
bp->rx_prod = bp->rx_cons = 0;
- } else
- ssb_pcicore_dev_irqvecs_enable(&sdev->bus->pcicore, sdev);
+ }
- ssb_device_enable(bp->sdev, 0);
b44_clear_stats(bp);
/*
struct net_device *dev = ssb_get_drvdata(sdev);
unregister_netdev(dev);
+ ssb_device_disable(sdev, 0);
ssb_bus_may_powerdown(sdev->bus);
free_netdev(dev);
ssb_pcihost_set_power_state(sdev, PCI_D3hot);
spin_lock_irqsave(&priv->txlock, flags);
/* check if there is space to queue this packet */
- if (nr_frags > priv->num_txbdfree) {
+ if ((nr_frags+1) > priv->num_txbdfree) {
/* no space, stop the queue */
netif_stop_queue(dev);
dev->stats.tx_fifo_errors++;
if (this_dev != 0) break; /* only autoprobe 1st one */
printk(KERN_NOTICE "hp-plus.c: Presently autoprobing (not recommended) for a single card.\n");
}
- dev = alloc_ei_netdev();
+ dev = alloc_eip_netdev();
if (!dev)
break;
dev->irq = irq[this_dev];
adapter->pci_mem_read = netxen_nic_pci_mem_read_2M;
adapter->pci_mem_write = netxen_nic_pci_mem_write_2M;
- mem_ptr0 = ioremap(mem_base, mem_len);
+ mem_ptr0 = pci_ioremap_bar(pdev, 0);
+ if (mem_ptr0 == NULL) {
+ dev_err(&pdev->dev, "failed to map PCI bar 0\n");
+ return -EIO;
+ }
+
pci_len0 = mem_len;
first_page_group_start = 0;
first_page_group_end = 0;
* See if the firmware gave us a virtual-physical port mapping.
*/
adapter->physical_port = adapter->portnum;
- i = adapter->pci_read_normalize(adapter, CRB_V2P(adapter->portnum));
- if (i != 0x55555555)
- adapter->physical_port = i;
+ if (adapter->fw_major < 4) {
+ i = adapter->pci_read_normalize(adapter,
+ CRB_V2P(adapter->portnum));
+ if (i != 0x55555555)
+ adapter->physical_port = i;
+ }
adapter->flags &= ~(NETXEN_NIC_MSI_ENABLED | NETXEN_NIC_MSIX_ENABLED);
#define RTL8169_TX_TIMEOUT (6*HZ)
#define RTL8169_PHY_TIMEOUT (10*HZ)
-#define RTL_EEPROM_SIG cpu_to_le32(0x8129)
-#define RTL_EEPROM_SIG_MASK cpu_to_le32(0xffff)
+#define RTL_EEPROM_SIG 0x8129
#define RTL_EEPROM_SIG_ADDR 0x0000
+#define RTL_EEPROM_MAC_ADDR 0x0007
/* write/read MMIO register */
#define RTL_W8(reg, val8) writeb ((val8), ioaddr + (reg))
/* Cfg9346Bits */
Cfg9346_Lock = 0x00,
Cfg9346_Unlock = 0xc0,
+ Cfg9346_Program = 0x80, /* Programming mode */
+ Cfg9346_EECS = 0x08, /* Chip select */
+ Cfg9346_EESK = 0x04, /* Serial data clock */
+ Cfg9346_EEDI = 0x02, /* Data input */
+ Cfg9346_EEDO = 0x01, /* Data output */
/* rx_mode_bits */
AcceptErr = 0x20,
/* RxConfigBits */
RxCfgFIFOShift = 13,
RxCfgDMAShift = 8,
+ RxCfg9356SEL = 6, /* EEPROM type: 0 = 9346, 1 = 9356 */
/* TxConfigBits */
TxInterFrameGapShift = 24,
};
+/* Delay between EEPROM clock transitions. Force out buffered PCI writes. */
+#define RTL_EEPROM_DELAY() RTL_R8(Cfg9346)
+#define RTL_EEPROM_READ_CMD 6
+
+/* Read a 16-bit word from the EEPROM. The EEPROM is addressed by words. */
+static u16 rtl_eeprom_read(void __iomem *ioaddr, int addr)
+{
+ u16 result = 0;
+ int cmd, cmd_len, i;
+
+	/* determine the EEPROM address width (in bits) */
+ if (RTL_R32(RxConfig) & (1 << RxCfg9356SEL)) {
+ /* EEPROM is 93C56 */
+ cmd_len = 3 + 8; /* 3 bits for command id and 8 for address */
+ cmd = (RTL_EEPROM_READ_CMD << 8) | (addr & 0xff);
+ } else {
+ /* EEPROM is 93C46 */
+ cmd_len = 3 + 6; /* 3 bits for command id and 6 for address */
+ cmd = (RTL_EEPROM_READ_CMD << 6) | (addr & 0x3f);
+ }
+
+ /* enter programming mode */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+
+ /* write command and requested address */
+ while (cmd_len--) {
+ u8 x = Cfg9346_Program | Cfg9346_EECS;
+
+ x |= (cmd & (1 << cmd_len)) ? Cfg9346_EEDI : 0;
+
+ /* write a bit */
+ RTL_W8(Cfg9346, x);
+ RTL_EEPROM_DELAY();
+
+ /* raise clock */
+ RTL_W8(Cfg9346, x | Cfg9346_EESK);
+ RTL_EEPROM_DELAY();
+ }
+
+ /* lower clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+
+ /* read back 16bit value */
+ for (i = 16; i > 0; i--) {
+ /* raise clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS | Cfg9346_EESK);
+ RTL_EEPROM_DELAY();
+
+ result <<= 1;
+ result |= (RTL_R8(Cfg9346) & Cfg9346_EEDO) ? 1 : 0;
+
+ /* lower clock */
+ RTL_W8(Cfg9346, Cfg9346_Program | Cfg9346_EECS);
+ RTL_EEPROM_DELAY();
+ }
+
+ RTL_W8(Cfg9346, Cfg9346_Program);
+ /* leave programming mode */
+ RTL_W8(Cfg9346, Cfg9346_Lock);
+
+ return result;
+}
+
+static void rtl_init_mac_address(struct rtl8169_private *tp,
+ void __iomem *ioaddr)
+{
+ struct pci_dev *pdev = tp->pci_dev;
+ u16 x;
+ u8 mac[8];
+
+ /* read EEPROM signature */
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_SIG_ADDR);
+
+ if (x != RTL_EEPROM_SIG) {
+ dev_info(&pdev->dev, "Missing EEPROM signature: %04x\n", x);
+ return;
+ }
+
+ /* read MAC address */
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR);
+ mac[0] = x & 0xff;
+ mac[1] = x >> 8;
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 1);
+ mac[2] = x & 0xff;
+ mac[3] = x >> 8;
+ x = rtl_eeprom_read(ioaddr, RTL_EEPROM_MAC_ADDR + 2);
+ mac[4] = x & 0xff;
+ mac[5] = x >> 8;
+
+ if (netif_msg_probe(tp)) {
+ DECLARE_MAC_BUF(buf);
+
+ dev_info(&pdev->dev, "MAC address found in EEPROM: %s\n",
+ print_mac(buf, mac));
+ }
+
+ if (is_valid_ether_addr(mac))
+ rtl_rar_set(tp, mac);
+}
+
static int __devinit
rtl8169_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
tp->mmio_addr = ioaddr;
+ rtl_init_mac_address(tp, ioaddr);
+
/* Get MAC address */
for (i = 0; i < MAC_ADDR_LEN; i++)
dev->dev_addr[i] = RTL_R8(MAC0 + i);
// Cables-to-Go USB Ethernet Adapter
USB_DEVICE(0x0b95, 0x772a),
.driver_info = (unsigned long) &ax88772_info,
+}, {
+ // ABOCOM for pci
+ USB_DEVICE(0x14ea, 0xab11),
+ .driver_info = (unsigned long) &ax88178_info,
+}, {
+ // ASIX 88772a
+ USB_DEVICE(0x0db0, 0xa877),
+ .driver_info = (unsigned long) &ax88772_info,
},
{ }, // END
};
USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long) &cdc_info,
+}, {
+ /* Ericsson F3507g */
+ USB_DEVICE_AND_INTERFACE_INFO(0x0bdb, 0x1900, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long) &cdc_info,
},
{ }, // END
};
if (dev->mii.mdio_read)
return mii_link_ok(&dev->mii);
- /* Otherwise, say we're up (to avoid breaking scripts) */
- return 1;
+	/* Otherwise, do the right thing for drivers calling netif_carrier_{on,off} */
+ return ethtool_op_get_link(net);
}
EXPORT_SYMBOL_GPL(usbnet_get_link);
USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MDLM,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long) &bogus_mdlm_info,
+}, {
+ /* Motorola MOTOMAGX phones */
+ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x6425, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long) &bogus_mdlm_info,
},
/* Olympus has some models with a Zaurus-compatible option.
return 0;
}
+static int veth_close(struct net_device *dev)
+{
+ struct veth_priv *priv = netdev_priv(dev);
+
+ netif_carrier_off(dev);
+ netif_carrier_off(priv->peer);
+
+ return 0;
+}
+
static int veth_dev_init(struct net_device *dev)
{
struct veth_net_stats *stats;
static const struct net_device_ops veth_netdev_ops = {
.ndo_init = veth_dev_init,
.ndo_open = veth_open,
+ .ndo_stop = veth_close,
.ndo_start_xmit = veth_xmit,
.ndo_get_stats = veth_get_stats,
.ndo_set_mac_address = eth_mac_addr,
dev->destructor = veth_dev_free;
}
-static void veth_change_state(struct net_device *dev)
-{
- struct net_device *peer;
- struct veth_priv *priv;
-
- priv = netdev_priv(dev);
- peer = priv->peer;
-
- if (netif_carrier_ok(peer)) {
- if (!netif_carrier_ok(dev))
- netif_carrier_on(dev);
- } else {
- if (netif_carrier_ok(dev))
- netif_carrier_off(dev);
- }
-}
-
-static int veth_device_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
-{
- struct net_device *dev = ptr;
-
- if (dev->netdev_ops->ndo_open != veth_open)
- goto out;
-
- switch (event) {
- case NETDEV_CHANGE:
- veth_change_state(dev);
- break;
- }
-out:
- return NOTIFY_DONE;
-}
-
-static struct notifier_block veth_notifier_block __read_mostly = {
- .notifier_call = veth_device_event,
-};
-
/*
* netlink interface
*/
static __init int veth_init(void)
{
- register_netdevice_notifier(&veth_notifier_block);
return rtnl_link_register(&veth_link_ops);
}
static __exit void veth_exit(void)
{
rtnl_link_unregister(&veth_link_ops);
- unregister_netdevice_notifier(&veth_notifier_block);
}
module_init(veth_init);
bad:
if (ah)
ath9k_hw_detach(ah);
+ ath9k_exit_debug(sc);
return error;
}
static int ath_attach(u16 devid, struct ath_softc *sc)
{
struct ieee80211_hw *hw = sc->hw;
- int error = 0;
+ int error = 0, i;
DPRINTF(sc, ATH_DBG_CONFIG, "Attach ATH hw\n");
/* initialize tx/rx engine */
error = ath_tx_init(sc, ATH_TXBUF);
if (error != 0)
- goto detach;
+ goto error_attach;
error = ath_rx_init(sc, ATH_RXBUF);
if (error != 0)
- goto detach;
+ goto error_attach;
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
	/* Initialize h/w Rfkill */
INIT_DELAYED_WORK(&sc->rf_kill.rfkill_poll, ath_rfkill_poll);
/* Initialize s/w rfkill */
- if (ath_init_sw_rfkill(sc))
- goto detach;
+ error = ath_init_sw_rfkill(sc);
+ if (error)
+ goto error_attach;
#endif
error = ieee80211_register_hw(hw);
ath_init_leds(sc);
return 0;
-detach:
- ath_detach(sc);
+
+error_attach:
+ /* cleanup tx queues */
+ for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++)
+ if (ATH_TXQ_SETUP(sc, i))
+ ath_tx_cleanupq(sc, &sc->tx.txq[i]);
+
+ ath9k_hw_detach(sc->sc_ah);
+ ath9k_exit_debug(sc);
+
return error;
}
pci_unmap_single(dev,
pci_unmap_addr(&txq->cmd[index]->meta, mapping),
pci_unmap_len(&txq->cmd[index]->meta, len),
- PCI_DMA_TODEVICE);
+ PCI_DMA_BIDIRECTIONAL);
/* Unmap chunks, if any. */
for (i = 1; i < num_tbs; i++) {
* within command buffer array. */
txcmd_phys = pci_map_single(priv->pci_dev,
out_cmd, sizeof(struct iwl_cmd),
- PCI_DMA_TODEVICE);
+ PCI_DMA_BIDIRECTIONAL);
pci_unmap_addr_set(&out_cmd->meta, mapping, txcmd_phys);
pci_unmap_len_set(&out_cmd->meta, len, sizeof(struct iwl_cmd));
/* Add buffer containing Tx command and MAC(!) header to TFD's
IWL_MAX_SCAN_SIZE : sizeof(struct iwl_cmd);
phys_addr = pci_map_single(priv->pci_dev, out_cmd,
- len, PCI_DMA_TODEVICE);
+ len, PCI_DMA_BIDIRECTIONAL);
pci_unmap_addr_set(&out_cmd->meta, mapping, phys_addr);
pci_unmap_len_set(&out_cmd->meta, len, len);
phys_addr += offsetof(struct iwl_cmd, hdr);
pci_unmap_single(priv->pci_dev,
pci_unmap_addr(&txq->cmd[cmd_idx]->meta, mapping),
pci_unmap_len(&txq->cmd[cmd_idx]->meta, len),
- PCI_DMA_TODEVICE);
+ PCI_DMA_BIDIRECTIONAL);
for (idx = iwl_queue_inc_wrap(idx, q->n_bd); q->read_ptr != idx;
q->read_ptr = iwl_queue_inc_wrap(q->read_ptr, q->n_bd)) {
static void lbs_ethtool_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
snprintf(info->fw_version, 32, "%u.%u.%u.p%u",
priv->fwrelease >> 24 & 0xff,
static int lbs_ethtool_get_eeprom(struct net_device *dev,
struct ethtool_eeprom *eeprom, u8 * bytes)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct cmd_ds_802_11_eeprom_access cmd;
int ret;
static void lbs_ethtool_get_stats(struct net_device *dev,
struct ethtool_stats *stats, uint64_t *data)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
static int lbs_ethtool_get_sset_count(struct net_device *dev, int sset)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
if (sset == ETH_SS_STATS && dev == priv->mesh_dev)
return MESH_STATS_NUM;
static void lbs_ethtool_get_wol(struct net_device *dev,
struct ethtool_wolinfo *wol)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
if (priv->wol_criteria == 0xffffffff) {
/* Interface driver didn't configure wake */
static int lbs_ethtool_set_wol(struct net_device *dev,
struct ethtool_wolinfo *wol)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
uint32_t criteria = 0;
if (priv->wol_criteria == 0xffffffff && wol->wolopts)
static ssize_t if_usb_firmware_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct if_usb_card *cardp = priv->card;
char fwname[FIRMWARE_NAME_MAX];
int ret;
static ssize_t if_usb_boot2_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct if_usb_card *cardp = priv->card;
char fwname[FIRMWARE_NAME_MAX];
int ret;
static ssize_t lbs_anycast_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
static ssize_t lbs_anycast_set(struct device *dev,
struct device_attribute *attr, const char * buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
uint32_t datum;
int ret;
static ssize_t lbs_prb_rsp_limit_get(struct device *dev,
struct device_attribute *attr, char *buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
u32 retry_limit;
static ssize_t lbs_prb_rsp_limit_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_access mesh_access;
int ret;
unsigned long retry_limit;
static ssize_t lbs_rtap_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
return snprintf(buf, 5, "0x%X\n", priv->monitormode);
}
struct device_attribute *attr, const char * buf, size_t count)
{
int monitor_mode;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
sscanf(buf, "%x", &monitor_mode);
if (monitor_mode) {
static ssize_t lbs_mesh_get(struct device *dev,
struct device_attribute *attr, char * buf)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
return snprintf(buf, 5, "0x%X\n", !!priv->mesh_dev);
}
static ssize_t lbs_mesh_set(struct device *dev,
struct device_attribute *attr, const char * buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
int enable;
int ret, action = CMD_ACT_MESH_CONFIG_STOP;
*/
static int lbs_dev_open(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev) ;
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_NET);
*/
static int lbs_eth_stop(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_NET);
static void lbs_tx_timeout(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_TX);
*/
static struct net_device_stats *lbs_get_stats(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_NET);
return &priv->stats;
static int lbs_set_mac_address(struct net_device *dev, void *addr)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct sockaddr *phwaddr = addr;
struct cmd_ds_802_11_mac_address cmd;
static void lbs_set_multicast_list(struct net_device *dev)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
schedule_work(&priv->mcast_work);
}
static int lbs_thread(void *data)
{
struct net_device *dev = data;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
wait_queue_t wait;
lbs_deb_enter(LBS_DEB_THREAD);
goto done;
}
priv = netdev_priv(dev);
+ dev->ml_priv = priv;
if (lbs_init_adapter(priv)) {
lbs_pr_err("failed to initialize adapter structure.\n");
static int mesh_get_default_parameters(struct device *dev,
struct mrvl_mesh_defaults *defs)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
int ret;
static ssize_t bootflag_set(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
static ssize_t boottime_set(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
static ssize_t channel_set(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
struct cmd_ds_mesh_config cmd;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
int len;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
struct cmd_ds_mesh_config cmd;
struct mrvl_mesh_defaults defs;
struct mrvl_meshie *ie;
- struct lbs_private *priv = netdev_priv(to_net_dev(dev));
+ struct lbs_private *priv = to_net_dev(dev)->ml_priv;
uint32_t datum;
int ret;
union iwreq_data *wrqu, char *extra)
{
DECLARE_SSID_BUF(ssid);
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_point *dwrq, char *extra)
{
#define SCAN_ITEM_SIZE 128
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int err = 0;
char *ev = extra;
char *stop = ev + dwrq->length;
int lbs_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
unsigned long flags;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct txpd *txpd;
char *p802x_hdr;
uint16_t pkt_len;
static int lbs_get_freq(struct net_device *dev, struct iw_request_info *info,
struct iw_freq *fwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_wap(struct net_device *dev, struct iw_request_info *info,
struct sockaddr *awrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int mesh_get_nick(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *vwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u32 val = vwrq->value;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_rts(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
static int lbs_set_frag(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u32 val = vwrq->value;
static int lbs_get_frag(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
static int lbs_get_mode(struct net_device *dev,
struct iw_request_info *info, u32 * uwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
s16 curlevel = 0;
int ret = 0;
static int lbs_set_retry(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 slimit = 0, llimit = 0;
static int lbs_get_retry(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u16 val = 0;
struct iw_point *dwrq, char *extra)
{
int i, j;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_range *range = (struct iw_range *)extra;
struct chan_freq_power *cfp;
u8 rates[MAX_RATES + 1];
static int lbs_set_power(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_power(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
EXCELLENT = 95,
PERFECT = 100
};
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u32 rssi_qual;
u32 tx_qual;
u32 quality = 0;
struct iw_freq *fwrq, char *extra)
{
int ret = -EINVAL;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
struct assoc_request * assoc_req;
struct iw_request_info *info,
struct iw_freq *fwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct chan_freq_power *cfp;
int ret = -EINVAL;
static int lbs_set_rate(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
u8 new_rate = 0;
int ret = -EINVAL;
u8 rates[MAX_RATES + 1];
static int lbs_get_rate(struct net_device *dev, struct iw_request_info *info,
struct iw_param *vwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info, u32 * uwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_point *dwrq, u8 * extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int index = (dwrq->flags & IW_ENCODE_INDEX) - 1;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_point *dwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
u16 is_default = 0, index = 0, set_tx_key = 0;
char *extra)
{
int ret = -EINVAL;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
int index, max_key_len;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct iw_encode_ext *ext = (struct iw_encode_ext *)extra;
int alg = ext->alg;
struct assoc_request * assoc_req;
struct iw_point *dwrq,
char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
struct assoc_request * assoc_req;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *dwrq,
char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
int ret = 0;
int updated = 0;
char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_param *vwrq, char *extra)
{
int ret = 0;
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
s16 dbm = (s16) vwrq->value;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_get_essid(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_essid(struct net_device *dev, struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
u8 ssid[IW_ESSID_MAX_SIZE];
u8 ssid_len = 0;
struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
lbs_deb_enter(LBS_DEB_WEXT);
struct iw_request_info *info,
struct iw_point *dwrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
int ret = 0;
lbs_deb_enter(LBS_DEB_WEXT);
static int lbs_set_wap(struct net_device *dev, struct iw_request_info *info,
struct sockaddr *awrq, char *extra)
{
- struct lbs_private *priv = netdev_priv(dev);
+ struct lbs_private *priv = dev->ml_priv;
struct assoc_request * assoc_req;
int ret = 0;
return NOTIFY_DONE;
}
+
+static void orinoco_register_pm_notifier(struct orinoco_private *priv)
+{
+ priv->pm_notifier.notifier_call = orinoco_pm_notifier;
+ register_pm_notifier(&priv->pm_notifier);
+}
+
+static void orinoco_unregister_pm_notifier(struct orinoco_private *priv)
+{
+ unregister_pm_notifier(&priv->pm_notifier);
+}
#else /* !PM_SLEEP || HERMES_CACHE_FW_ON_INIT */
-#define orinoco_pm_notifier NULL
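+/* no-op stubs: nothing to register without PM sleep support */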
+#define orinoco_register_pm_notifier(priv) do { } while(0)
+#define orinoco_unregister_pm_notifier(priv) do { } while(0)
#endif
/********************************************************************/
priv->cached_fw = NULL;
/* Register PM notifiers */
- priv->pm_notifier.notifier_call = orinoco_pm_notifier;
- register_pm_notifier(&priv->pm_notifier);
+ orinoco_register_pm_notifier(priv);
return dev;
}
kfree(rx_data);
}
- unregister_pm_notifier(&priv->pm_notifier);
+ orinoco_unregister_pm_notifier(priv);
orinoco_uncache_fw(priv);
priv->wpa_ie_len = 0;
{USB_DEVICE(0x0bda, 0x8189), .driver_info = DEVICE_RTL8187B},
{USB_DEVICE(0x0bda, 0x8197), .driver_info = DEVICE_RTL8187B},
{USB_DEVICE(0x0bda, 0x8198), .driver_info = DEVICE_RTL8187B},
+ /* Surecom */
+ {USB_DEVICE(0x0769, 0x11F2), .driver_info = DEVICE_RTL8187},
+ /* Logitech */
+ {USB_DEVICE(0x0789, 0x010C), .driver_info = DEVICE_RTL8187},
/* Netgear */
{USB_DEVICE(0x0846, 0x6100), .driver_info = DEVICE_RTL8187},
{USB_DEVICE(0x0846, 0x6a00), .driver_info = DEVICE_RTL8187},
/* Sitecom */
{USB_DEVICE(0x0df6, 0x000d), .driver_info = DEVICE_RTL8187},
{USB_DEVICE(0x0df6, 0x0028), .driver_info = DEVICE_RTL8187B},
+ /* Sphairon Access Systems GmbH */
+ {USB_DEVICE(0x114B, 0x0150), .driver_info = DEVICE_RTL8187},
+ /* Dick Smith Electronics */
+ {USB_DEVICE(0x1371, 0x9401), .driver_info = DEVICE_RTL8187},
/* Abocom */
{USB_DEVICE(0x13d1, 0xabe6), .driver_info = DEVICE_RTL8187},
+ /* Qcom */
+ {USB_DEVICE(0x18E8, 0x6232), .driver_info = DEVICE_RTL8187},
+ /* AirLive */
+ {USB_DEVICE(0x1b75, 0x8187), .driver_info = DEVICE_RTL8187},
{}
};
entry_header = (struct acpi_dmar_header *)(dmar + 1);
while (((unsigned long)entry_header) <
(((unsigned long)dmar) + dmar_tbl->length)) {
+ /* Avoid looping forever on bad ACPI tables */
+ if (entry_header->length == 0) {
+ printk(KERN_WARNING PREFIX
+ "Invalid 0-length structure\n");
+ ret = -EINVAL;
+ break;
+ }
+
dmar_table_print_dmar_entry(entry_header);
switch (entry_header->type) {
int map_size;
u32 ver;
static int iommu_allocated = 0;
- int agaw;
+ int agaw = 0;
iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
if (!iommu)
iommu->cap = dmar_readq(iommu->reg + DMAR_CAP_REG);
iommu->ecap = dmar_readq(iommu->reg + DMAR_ECAP_REG);
+#ifdef CONFIG_DMAR
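+	/* agaw is only computed when DMA remapping is configured in */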
agaw = iommu_calculate_agaw(iommu);
if (agaw < 0) {
printk(KERN_ERR
iommu->seq_id);
goto error;
}
+#endif
iommu->agaw = agaw;
/* the registers might be more than one page */
}
}
+static int qi_check_fault(struct intel_iommu *iommu, int index)
+{
+ u32 fault;
+ int head;
+ struct q_inval *qi = iommu->qi;
+ int wait_index = (index + 1) % QI_LENGTH;
+
+ fault = readl(iommu->reg + DMAR_FSTS_REG);
+
+ /*
+ * If IQE happens, the head points to the descriptor associated
+ * with the error. No new descriptors are fetched until the IQE
+ * is cleared.
+ */
+ if (fault & DMA_FSTS_IQE) {
+ head = readl(iommu->reg + DMAR_IQH_REG);
+ if ((head >> 4) == index) {
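+			/*
+			 * Overwrite the failed descriptor with the (valid)
+			 * wait descriptor and clear the fault so the
+			 * hardware can refetch and continue.
+			 */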
+ memcpy(&qi->desc[index], &qi->desc[wait_index],
+ sizeof(struct qi_desc));
+ __iommu_flush_cache(iommu, &qi->desc[index],
+ sizeof(struct qi_desc));
+ writel(DMA_FSTS_IQE, iommu->reg + DMAR_FSTS_REG);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
/*
* Submit the queued invalidation descriptor to the remapping
* hardware unit and wait for its completion.
*/
-void qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
+int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
{
+ int rc = 0;
struct q_inval *qi = iommu->qi;
struct qi_desc *hw, wait_desc;
int wait_index, index;
unsigned long flags;
if (!qi)
- return;
+ return 0;
hw = qi->desc;
hw[index] = *desc;
- wait_desc.low = QI_IWD_STATUS_DATA(2) | QI_IWD_STATUS_WRITE | QI_IWD_TYPE;
+ wait_desc.low = QI_IWD_STATUS_DATA(QI_DONE) |
+ QI_IWD_STATUS_WRITE | QI_IWD_TYPE;
wait_desc.high = virt_to_phys(&qi->desc_status[wait_index]);
hw[wait_index] = wait_desc;
qi->free_head = (qi->free_head + 2) % QI_LENGTH;
qi->free_cnt -= 2;
- spin_lock(&iommu->register_lock);
/*
* update the HW tail register indicating the presence of
* new descriptors.
*/
writel(qi->free_head << 4, iommu->reg + DMAR_IQT_REG);
- spin_unlock(&iommu->register_lock);
while (qi->desc_status[wait_index] != QI_DONE) {
/*
* a deadlock where the interrupt context can wait indefinitely
* for free slots in the queue.
*/
+ rc = qi_check_fault(iommu, index);
+ if (rc)
+ goto out;
+
spin_unlock(&qi->q_lock);
cpu_relax();
spin_lock(&qi->q_lock);
}
-
- qi->desc_status[index] = QI_DONE;
+out:
+ qi->desc_status[index] = qi->desc_status[wait_index] = QI_DONE;
reclaim_free_desc(qi);
spin_unlock_irqrestore(&qi->q_lock, flags);
+
+ return rc;
}
/*
desc.low = QI_IEC_TYPE;
desc.high = 0;
+ /* should never fail */
qi_submit_sync(&desc, iommu);
}
int qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid, u8 fm,
u64 type, int non_present_entry_flush)
{
-
struct qi_desc desc;
if (non_present_entry_flush) {
| QI_CC_GRAN(type) | QI_CC_TYPE;
desc.high = 0;
- qi_submit_sync(&desc, iommu);
-
- return 0;
-
+ return qi_submit_sync(&desc, iommu);
}
int qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
desc.high = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(ih)
| QI_IOTLB_AM(size_order);
- qi_submit_sync(&desc, iommu);
-
- return 0;
-
+ return qi_submit_sync(&desc, iommu);
}
/*
int cmd_busy;
unsigned int no_cmd_complete:1;
unsigned int link_active_reporting:1;
+ unsigned int notification_enabled:1;
};
#define INT_BUTTON_IGNORE 0
extern int pciehp_unconfigure_device(struct slot *p_slot);
extern void pciehp_queue_pushbutton_work(struct work_struct *work);
struct controller *pcie_init(struct pcie_device *dev);
+int pcie_init_notification(struct controller *ctrl);
int pciehp_enable_slot(struct slot *p_slot);
int pciehp_disable_slot(struct slot *p_slot);
int pcie_enable_notification(struct controller *ctrl);
goto err_out_release_ctlr;
}
+ /* Enable events after we have setup the data structures */
+ rc = pcie_init_notification(ctrl);
+ if (rc) {
+ ctrl_err(ctrl, "Notification initialization failed\n");
+ goto err_out_release_ctlr;
+ }
+
/* Check if slot is occupied */
t_slot = pciehp_find_slot(ctrl, ctrl->slot_device_offset);
t_slot->hpc_ops->get_adapter_status(t_slot, &value);
ctrl_warn(ctrl, "Cannot disable software notification\n");
}
-static int pcie_init_notification(struct controller *ctrl)
+int pcie_init_notification(struct controller *ctrl)
{
if (pciehp_request_irq(ctrl))
return -1;
pciehp_free_irq(ctrl);
return -1;
}
+ ctrl->notification_enabled = 1;
return 0;
}
static void pcie_shutdown_notification(struct controller *ctrl)
{
- pcie_disable_notification(ctrl);
- pciehp_free_irq(ctrl);
+ if (ctrl->notification_enabled) {
+ pcie_disable_notification(ctrl);
+ pciehp_free_irq(ctrl);
+ ctrl->notification_enabled = 0;
+ }
}
static int pcie_init_slot(struct controller *ctrl)
if (pcie_init_slot(ctrl))
goto abort_ctrl;
- if (pcie_init_notification(ctrl))
- goto abort_slot;
-
return ctrl;
-abort_slot:
- pcie_cleanup_slot(ctrl);
abort_ctrl:
kfree(ctrl);
abort:
return index;
}
-static void qi_flush_iec(struct intel_iommu *iommu, int index, int mask)
+static int qi_flush_iec(struct intel_iommu *iommu, int index, int mask)
{
struct qi_desc desc;
| QI_IEC_SELECTIVE;
desc.high = 0;
- qi_submit_sync(&desc, iommu);
+ return qi_submit_sync(&desc, iommu);
}
int map_irq_to_irte_handle(int irq, u16 *sub_handle)
int modify_irte(int irq, struct irte *irte_modified)
{
+ int rc;
int index;
struct irte *irte;
struct intel_iommu *iommu;
set_64bit((unsigned long *)irte, irte_modified->low | (1 << 1));
__iommu_flush_cache(iommu, irte, sizeof(*irte));
- qi_flush_iec(iommu, index, 0);
-
+ rc = qi_flush_iec(iommu, index, 0);
spin_unlock(&irq_2_ir_lock);
- return 0;
+
+ return rc;
}
int flush_irte(int irq)
{
+ int rc;
int index;
struct intel_iommu *iommu;
struct irq_2_iommu *irq_iommu;
index = irq_iommu->irte_index + irq_iommu->sub_handle;
- qi_flush_iec(iommu, index, irq_iommu->irte_mask);
+ rc = qi_flush_iec(iommu, index, irq_iommu->irte_mask);
spin_unlock(&irq_2_ir_lock);
- return 0;
+ return rc;
}
struct intel_iommu *map_ioapic_to_ir(int apic)
int free_irte(int irq)
{
+ int rc = 0;
int index, i;
struct irte *irte;
struct intel_iommu *iommu;
if (!irq_iommu->sub_handle) {
for (i = 0; i < (1 << irq_iommu->irte_mask); i++)
set_64bit((unsigned long *)irte, 0);
- qi_flush_iec(iommu, index, irq_iommu->irte_mask);
+ rc = qi_flush_iec(iommu, index, irq_iommu->irte_mask);
}
irq_iommu->iommu = NULL;
spin_unlock(&irq_2_ir_lock);
- return 0;
+ return rc;
}
static void iommu_set_intr_remapping(struct intel_iommu *iommu, int mode)
}
#endif /* 0 */
+
+static void set_device_error_reporting(struct pci_dev *dev, void *data)
+{
+ bool enable = *((bool *)data);
+
+ if (dev->pcie_type != PCIE_RC_PORT &&
+ dev->pcie_type != PCIE_SW_UPSTREAM_PORT &&
+ dev->pcie_type != PCIE_SW_DOWNSTREAM_PORT)
+ return;
+
+ if (enable)
+ pci_enable_pcie_error_reporting(dev);
+ else
+ pci_disable_pcie_error_reporting(dev);
+}
+
+/**
+ * set_downstream_devices_error_reporting - enable/disable the error reporting bits on the root port and its downstream ports.
+ * @dev: pointer to root port's pci_dev data structure
+ * @enable: true = enable error reporting, false = disable error reporting.
+ */
+static void set_downstream_devices_error_reporting(struct pci_dev *dev,
+ bool enable)
+{
+ set_device_error_reporting(dev, &enable);
+ pci_walk_bus(dev->subordinate, set_device_error_reporting, &enable);
+}
+
static int find_device_iter(struct device *device, void *data)
{
struct pci_dev *dev;
	pci_read_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, &reg32);
pci_write_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, reg32);
- /* Enable Root Port device reporting error itself */
-	pci_read_config_word(pdev, pos+PCI_EXP_DEVCTL, &reg16);
- reg16 = reg16 |
- PCI_EXP_DEVCTL_CERE |
- PCI_EXP_DEVCTL_NFERE |
- PCI_EXP_DEVCTL_FERE |
- PCI_EXP_DEVCTL_URRE;
- pci_write_config_word(pdev, pos+PCI_EXP_DEVCTL,
- reg16);
+ /*
+ * Enable error reporting for the root port device and downstream port
+ * devices.
+ */
+ set_downstream_devices_error_reporting(pdev, true);
/* Enable Root Port's interrupt in response to error messages */
pci_write_config_dword(pdev,
u32 reg32;
int pos;
+ /*
+ * Disable error reporting for the root port device and downstream port
+ * devices.
+ */
+ set_downstream_devices_error_reporting(pdev, false);
+
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
/* Disable Root's interrupt in response to error messages */
pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_COMMAND, 0);
pcie_portdrv_save_config(dev);
- pci_enable_pcie_error_reporting(dev);
-
return 0;
}
*/
#define AMD_813X_MISC 0x40
#define AMD_813X_NOIOAMODE (1<<0)
+#define AMD_813X_REV_B2 0x13
static void quirk_disable_amd_813x_boot_interrupt(struct pci_dev *dev)
{
if (noioapicquirk)
return;
+ if (dev->revision == AMD_813X_REV_B2)
+ return;
pci_read_config_dword(dev, AMD_813X_MISC, &pci_config_dword);
pci_config_dword &= ~AMD_813X_NOIOAMODE;
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
quirk_msi_ht_cap);
-
/* The nVidia CK804 chipset may have 2 HT MSI mappings.
* MSI are supported if the MSI capability set in any of these mappings.
*/
PCI_DEVICE_ID_SERVERWORKS_HT1000_PXB,
ht_enable_msi_mapping);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8132_BRIDGE,
+ ht_enable_msi_mapping);
+
/* The P5N32-SLI Premium motherboard from Asus has a problem with msi
* for the MCP55 NIC. It is not yet determined whether the msi problem
* also affects other devices. As for now, turn off msi for this device.
PCI_DEVICE_ID_NVIDIA_NVENET_15,
nvenet_msi_disable);
-static void __devinit nv_msi_ht_cap_quirk(struct pci_dev *dev)
+static void __devinit nv_ht_enable_msi_mapping(struct pci_dev *dev)
{
struct pci_dev *host_bridge;
+ int pos;
+ int i, dev_no;
+ int found = 0;
+
+ dev_no = dev->devfn >> 3;
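+	/* search back from this device's slot for a bridge with an HT slave capability */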
+ for (i = dev_no; i >= 0; i--) {
+ host_bridge = pci_get_slot(dev->bus, PCI_DEVFN(i, 0));
+ if (!host_bridge)
+ continue;
+
+ pos = pci_find_ht_capability(host_bridge, HT_CAPTYPE_SLAVE);
+ if (pos != 0) {
+ found = 1;
+ break;
+ }
+ pci_dev_put(host_bridge);
+ }
+
+ if (!found)
+ return;
+
+	/* the host bridge already has HT MSI mapping enabled */
+ if (msi_ht_cap_enabled(host_bridge))
+ goto out;
+
+ ht_enable_msi_mapping(dev);
+
+out:
+ pci_dev_put(host_bridge);
+}
+
+static void __devinit ht_disable_msi_mapping(struct pci_dev *dev)
+{
+ int pos, ttl = 48;
+
+ pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING);
+ while (pos && ttl--) {
+ u8 flags;
+
+ if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS,
+ &flags) == 0) {
+ dev_info(&dev->dev, "Enabling HT MSI Mapping\n");
+
+ pci_write_config_byte(dev, pos + HT_MSI_FLAGS,
+ flags & ~HT_MSI_FLAGS_ENABLE);
+ }
+ pos = pci_find_next_ht_capability(dev, pos,
+ HT_CAPTYPE_MSI_MAPPING);
+ }
+}
+
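+/*
+ * Returns 0 if the device has no HT MSI capability, 1 if a capability
+ * is present but not enabled, and 2 if mapping is enabled.
+ */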
+static int __devinit ht_check_msi_mapping(struct pci_dev *dev)
+{
int pos, ttl = 48;
+ int found = 0;
+
+ /* check if there is HT MSI cap or enabled on this device */
+ pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING);
+ while (pos && ttl--) {
+ u8 flags;
+
+ if (found < 1)
+ found = 1;
+ if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS,
+ &flags) == 0) {
+ if (flags & HT_MSI_FLAGS_ENABLE) {
+ if (found < 2) {
+ found = 2;
+ break;
+ }
+ }
+ }
+ pos = pci_find_next_ht_capability(dev, pos,
+ HT_CAPTYPE_MSI_MAPPING);
+ }
+
+ return found;
+}
+
+static void __devinit nv_msi_ht_cap_quirk(struct pci_dev *dev)
+{
+ struct pci_dev *host_bridge;
+ int pos;
+ int found;
+
+ /* check if there is HT MSI cap or enabled on this device */
+ found = ht_check_msi_mapping(dev);
+
+ /* no HT MSI CAP */
+ if (found == 0)
+ return;
/*
* HT MSI mapping should be disabled on devices that are below
pos = pci_find_ht_capability(host_bridge, HT_CAPTYPE_SLAVE);
if (pos != 0) {
/* Host bridge is to HT */
- ht_enable_msi_mapping(dev);
+ if (found == 1) {
+ /* it is not enabled, try to enable it */
+ nv_ht_enable_msi_mapping(dev);
+ }
return;
}
- /* Host bridge is not to HT, disable HT MSI mapping on this device */
- pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING);
- while (pos && ttl--) {
- u8 flags;
+ /* HT MSI is not enabled */
+ if (found == 1)
+ return;
- if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS,
- &flags) == 0) {
- dev_info(&dev->dev, "Disabling HT MSI mapping");
- pci_write_config_byte(dev, pos + HT_MSI_FLAGS,
- flags & ~HT_MSI_FLAGS_ENABLE);
- }
- pos = pci_find_next_ht_capability(dev, pos,
- HT_CAPTYPE_MSI_MAPPING);
- }
+ /* Host bridge is not to HT, disable HT MSI mapping on this device */
+ ht_disable_msi_mapping(dev);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, nv_msi_ht_cap_quirk);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AL, PCI_ANY_ID, nv_msi_ht_cap_quirk);
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
#include <scsi/libiscsi_tcp.h>
/* from cxgb3 LLD */
struct cxgb3i_conn *cconn;
};
+/**
+ * struct cxgb3i_task_data - private iscsi task data
+ *
+ * @nr_frags: # of coalesced page frags (from scsi sgl)
+ * @frags: coalesced page frags (from scsi sgl)
+ * @skb: tx pdu skb
+ * @offset: data offset for the next pdu
+ * @count: max. possible pdu payload
+ * @sgoffset: offset to the first sg entry for a given offset
+ */
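+/* worst case: coalesced frags no smaller than one 512-byte sector */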
+#define MAX_PDU_FRAGS ((ULP2_MAX_PDU_PAYLOAD + 512 - 1) / 512)
+struct cxgb3i_task_data {
+ unsigned short nr_frags;
+ skb_frag_t frags[MAX_PDU_FRAGS];
+ struct sk_buff *skb;
+ unsigned int offset;
+ unsigned int count;
+ unsigned int sgoffset;
+};
+
int cxgb3i_iscsi_init(void);
void cxgb3i_iscsi_cleanup(void);
write_unlock(&cxgb3i_ddp_rwlock);
ddp_log_info("nppods %u (0x%x ~ 0x%x), bits %u, mask 0x%x,0x%x "
- "pkt %u,%u.\n",
+ "pkt %u/%u, %u/%u.\n",
ppmax, ddp->llimit, ddp->ulimit, ddp->idx_bits,
ddp->idx_mask, ddp->rsvd_tag_mask,
- ddp->max_txsz, ddp->max_rxsz);
+ ddp->max_txsz, uinfo.max_txsz,
+ ddp->max_rxsz, uinfo.max_rxsz);
return 0;
free_ddp_map:
* cxgb3i_adapter_ddp_init - initialize the adapter's ddp resource
* @tdev: t3cdev adapter
* @tformat: tag format
- * @txsz: max tx pkt size, filled in by this func.
- * @rxsz: max rx pkt size, filled in by this func.
+ * @txsz: max tx pdu payload size, filled in by this func.
+ * @rxsz: max rx pdu payload size, filled in by this func.
* initialize the ddp pagepod manager for a given adapter if needed and
* setup the tag format for a given iscsi entity
*/
tformat->sw_bits, tformat->rsvd_bits,
tformat->rsvd_shift, tformat->rsvd_mask);
- *txsz = ddp->max_txsz;
- *rxsz = ddp->max_rxsz;
- ddp_log_info("ddp max pkt size: %u, %u.\n",
- ddp->max_txsz, ddp->max_rxsz);
+ *txsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
+ ddp->max_txsz - ISCSI_PDU_NONPAYLOAD_LEN);
+ *rxsz = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
+ ddp->max_rxsz - ISCSI_PDU_NONPAYLOAD_LEN);
+ ddp_log_info("max payload size: %u/%u, %u/%u.\n",
+ *txsz, ddp->max_txsz, *rxsz, ddp->max_rxsz);
return 0;
}
EXPORT_SYMBOL_GPL(cxgb3i_adapter_ddp_init);
#ifndef __CXGB3I_ULP2_DDP_H__
#define __CXGB3I_ULP2_DDP_H__
+#include <linux/vmalloc.h>
+
/**
* struct cxgb3i_tag_format - cxgb3i ulp tag format for an iscsi entity
*
struct sk_buff **gl_skb;
};
+#define ISCSI_PDU_NONPAYLOAD_LEN 312 /* bhs(48) + ahs(256) + digest(8) */
#define ULP2_MAX_PKT_SIZE 16224
-#define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_MAX)
+#define ULP2_MAX_PDU_PAYLOAD (ULP2_MAX_PKT_SIZE - ISCSI_PDU_NONPAYLOAD_LEN)
#define PPOD_PAGES_MAX 4
#define PPOD_PAGES_SHIFT 2 /* 4 pages per pod */
#include "cxgb3i.h"
#define DRV_MODULE_NAME "cxgb3i"
-#define DRV_MODULE_VERSION "1.0.0"
-#define DRV_MODULE_RELDATE "Jun. 1, 2008"
+#define DRV_MODULE_VERSION "1.0.1"
+#define DRV_MODULE_RELDATE "Jan. 2009"
static char version[] =
"Chelsio S3xx iSCSI Driver " DRV_MODULE_NAME
cls_session = iscsi_session_setup(&cxgb3i_iscsi_transport, shost,
cmds_max,
- sizeof(struct iscsi_tcp_task),
+ sizeof(struct iscsi_tcp_task) +
+ sizeof(struct cxgb3i_task_data),
initial_cmdsn, ISCSI_MAX_TARGET);
if (!cls_session)
return NULL;
{
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
- unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
- cconn->hba->snic->tx_max_size -
- ISCSI_PDU_NONPAYLOAD_MAX);
+ unsigned int max = max(512 * MAX_SKB_FRAGS, SKB_TX_HEADROOM);
+ max = min(cconn->hba->snic->tx_max_size, max);
if (conn->max_xmit_dlength)
- conn->max_xmit_dlength = min_t(unsigned int,
- conn->max_xmit_dlength, max);
+ conn->max_xmit_dlength = min(conn->max_xmit_dlength, max);
else
conn->max_xmit_dlength = max;
align_pdu_size(conn->max_xmit_dlength);
- cxgb3i_log_info("conn 0x%p, max xmit %u.\n",
+ cxgb3i_api_debug("conn 0x%p, max xmit %u.\n",
conn, conn->max_xmit_dlength);
return 0;
}
{
struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
- unsigned int max = min_t(unsigned int, ULP2_MAX_PDU_PAYLOAD,
- cconn->hba->snic->rx_max_size -
- ISCSI_PDU_NONPAYLOAD_MAX);
+ unsigned int max = cconn->hba->snic->rx_max_size;
align_pdu_size(max);
if (conn->max_recv_dlength) {
conn->max_recv_dlength, max);
return -EINVAL;
}
- conn->max_recv_dlength = min_t(unsigned int,
- conn->max_recv_dlength, max);
+ conn->max_recv_dlength = min(conn->max_recv_dlength, max);
align_pdu_size(conn->max_recv_dlength);
} else
conn->max_recv_dlength = max;
.proc_name = "cxgb3i",
.queuecommand = iscsi_queuecommand,
.change_queue_depth = iscsi_change_queue_depth,
- .can_queue = 128 * (ISCSI_DEF_XMIT_CMDS_MAX - 1),
+ .can_queue = CXGB3I_SCSI_QDEPTH_DFLT - 1,
.sg_tablesize = SG_ALL,
.max_sectors = 0xFFFF,
.cmd_per_lun = ISCSI_DEF_CMD_PER_LUN,
#include "cxgb3i_ddp.h"
#ifdef __DEBUG_C3CN_CONN__
-#define c3cn_conn_debug cxgb3i_log_info
+#define c3cn_conn_debug cxgb3i_log_debug
#else
#define c3cn_conn_debug(fmt...)
#endif
#ifdef __DEBUG_C3CN_TX__
-#define c3cn_tx_debug cxgb3i_log_debug
+#define c3cn_tx_debug cxgb3i_log_debug
#else
#define c3cn_tx_debug(fmt...)
#endif
#ifdef __DEBUG_C3CN_RX__
-#define c3cn_rx_debug cxgb3i_log_debug
+#define c3cn_rx_debug cxgb3i_log_debug
#else
#define c3cn_rx_debug(fmt...)
#endif
module_param(cxgb3_rcv_win, int, 0644);
MODULE_PARM_DESC(cxgb3_rcv_win, "TCP receive window in bytes (default=256KB)");
-static int cxgb3_snd_win = 64 * 1024;
+static int cxgb3_snd_win = 128 * 1024;
module_param(cxgb3_snd_win, int, 0644);
-MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=64KB)");
+MODULE_PARM_DESC(cxgb3_snd_win, "TCP send window in bytes (default=128KB)");
static int cxgb3_rx_credit_thres = 10 * 1024;
module_param(cxgb3_rx_credit_thres, int, 0644);
static void skb_entail(struct s3_conn *c3cn, struct sk_buff *skb,
int flags)
{
- CXGB3_SKB_CB(skb)->seq = c3cn->write_seq;
- CXGB3_SKB_CB(skb)->flags = flags;
+ skb_tcp_seq(skb) = c3cn->write_seq;
+ skb_flags(skb) = flags;
__skb_queue_tail(&c3cn->write_queue, skb);
}
* The number of WRs needed for an skb depends on the number of fragments
* in the skb and whether it has any payload in its main body. This maps the
* length of the gather list represented by an skb into the # of necessary WRs.
- *
- * The max. length of an skb is controlled by the max pdu size which is ~16K.
- * Also, assume the min. fragment length is the sector size (512), then add
- * extra fragment counts for iscsi bhs and payload padding.
+ * The extra two fragments are for iscsi bhs and payload padding.
*/
-#define SKB_WR_LIST_SIZE (16384/512 + 3)
+#define SKB_WR_LIST_SIZE (MAX_SKB_FRAGS + 2)
static unsigned int skb_wrs[SKB_WR_LIST_SIZE] __read_mostly;
static void s3_init_wr_tab(unsigned int wr_len)
static inline void reset_wr_list(struct s3_conn *c3cn)
{
- c3cn->wr_pending_head = NULL;
+ c3cn->wr_pending_head = c3cn->wr_pending_tail = NULL;
}
/*
static inline void enqueue_wr(struct s3_conn *c3cn,
struct sk_buff *skb)
{
- skb_wr_data(skb) = NULL;
+ skb_tx_wr_next(skb) = NULL;
/*
* We want to take an extra reference since both us and the driver
if (!c3cn->wr_pending_head)
c3cn->wr_pending_head = skb;
else
- skb_wr_data(skb) = skb;
+ skb_tx_wr_next(c3cn->wr_pending_tail) = skb;
c3cn->wr_pending_tail = skb;
}
+static int count_pending_wrs(struct s3_conn *c3cn)
+{
+ int n = 0;
+ const struct sk_buff *skb = c3cn->wr_pending_head;
+
+ while (skb) {
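+		/* skb->csum caches the WR count charged for this skb */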
+ n += skb->csum;
+ skb = skb_tx_wr_next(skb);
+ }
+ return n;
+}
+
static inline struct sk_buff *peek_wr(const struct s3_conn *c3cn)
{
return c3cn->wr_pending_head;
if (likely(skb)) {
/* Don't bother clearing the tail */
- c3cn->wr_pending_head = skb_wr_data(skb);
- skb_wr_data(skb) = NULL;
+ c3cn->wr_pending_head = skb_tx_wr_next(skb);
+ skb_tx_wr_next(skb) = NULL;
}
return skb;
}
}
static inline void make_tx_data_wr(struct s3_conn *c3cn, struct sk_buff *skb,
- int len)
+ int len, int req_completion)
{
struct tx_data_wr *req;
skb_reset_transport_header(skb);
req = (struct tx_data_wr *)__skb_push(skb, sizeof(*req));
- req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA));
+ req->wr_hi = htonl(V_WR_OP(FW_WROPCODE_OFLD_TX_DATA) |
+ (req_completion ? F_WR_COMPL : 0));
req->wr_lo = htonl(V_WR_TID(c3cn->tid));
req->sndseq = htonl(c3cn->snd_nxt);
/* len includes the length of any HW ULP additions */
if (unlikely(c3cn->state == C3CN_STATE_CONNECTING ||
c3cn->state == C3CN_STATE_CLOSE_WAIT_1 ||
- c3cn->state == C3CN_STATE_ABORTING)) {
+ c3cn->state >= C3CN_STATE_ABORTING)) {
c3cn_tx_debug("c3cn 0x%p, in closing state %u.\n",
c3cn, c3cn->state);
return 0;
if (c3cn->wr_avail < wrs_needed) {
c3cn_tx_debug("c3cn 0x%p, skb len %u/%u, frag %u, "
"wr %d < %u.\n",
- c3cn, skb->len, skb->datalen, frags,
+ c3cn, skb->len, skb->data_len, frags,
wrs_needed, c3cn->wr_avail);
break;
}
c3cn->wr_unacked += wrs_needed;
enqueue_wr(c3cn, skb);
- if (likely(CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_NEED_HDR)) {
- len += ulp_extra_len(skb);
- make_tx_data_wr(c3cn, skb, len);
- c3cn->snd_nxt += len;
- if ((req_completion
- && c3cn->wr_unacked == wrs_needed)
- || (CXGB3_SKB_CB(skb)->flags & C3CB_FLAG_COMPL)
- || c3cn->wr_unacked >= c3cn->wr_max / 2) {
- struct work_request_hdr *wr = cplhdr(skb);
+ c3cn_tx_debug("c3cn 0x%p, enqueue, skb len %u/%u, frag %u, "
+ "wr %d, left %u, unack %u.\n",
+ c3cn, skb->len, skb->data_len, frags,
+ wrs_needed, c3cn->wr_avail, c3cn->wr_unacked);
+
- wr->wr_hi |= htonl(F_WR_COMPL);
+ if (likely(skb_flags(skb) & C3CB_FLAG_NEED_HDR)) {
+ if ((req_completion &&
+ c3cn->wr_unacked == wrs_needed) ||
+ (skb_flags(skb) & C3CB_FLAG_COMPL) ||
+ c3cn->wr_unacked >= c3cn->wr_max / 2) {
+ req_completion = 1;
c3cn->wr_unacked = 0;
}
- CXGB3_SKB_CB(skb)->flags &= ~C3CB_FLAG_NEED_HDR;
+ len += ulp_extra_len(skb);
+ make_tx_data_wr(c3cn, skb, len, req_completion);
+ c3cn->snd_nxt += len;
+ skb_flags(skb) &= ~C3CB_FLAG_NEED_HDR;
}
total_size += skb->truesize;
if (unlikely(c3cn_flag(c3cn, C3CN_ACTIVE_CLOSE_NEEDED)))
/* upper layer has requested closing */
send_abort_req(c3cn);
- else if (c3cn_push_tx_frames(c3cn, 1))
+ else {
+ if (skb_queue_len(&c3cn->write_queue))
+ c3cn_push_tx_frames(c3cn, 1);
cxgb3i_conn_tx_open(c3cn);
+ }
}
static int do_act_establish(struct t3cdev *cdev, struct sk_buff *skb,
return;
}
- CXGB3_SKB_CB(skb)->seq = ntohl(hdr_cpl->seq);
- CXGB3_SKB_CB(skb)->flags = 0;
+ skb_tcp_seq(skb) = ntohl(hdr_cpl->seq);
+ skb_flags(skb) = 0;
skb_reset_transport_header(skb);
__skb_pull(skb, sizeof(struct cpl_iscsi_hdr));
goto abort_conn;
skb_ulp_mode(skb) = ULP2_FLAG_DATA_READY;
- skb_ulp_pdulen(skb) = ntohs(ddp_cpl.len);
- skb_ulp_ddigest(skb) = ntohl(ddp_cpl.ulp_crc);
+ skb_rx_pdulen(skb) = ntohs(ddp_cpl.len);
+ skb_rx_ddigest(skb) = ntohl(ddp_cpl.ulp_crc);
status = ntohl(ddp_cpl.ddp_status);
c3cn_rx_debug("rx skb 0x%p, len %u, pdulen %u, ddp status 0x%x.\n",
- skb, skb->len, skb_ulp_pdulen(skb), status);
+ skb, skb->len, skb_rx_pdulen(skb), status);
if (status & (1 << RX_DDP_STATUS_HCRC_SHIFT))
skb_ulp_mode(skb) |= ULP2_FLAG_HCRC_ERROR;
} else if (status & (1 << RX_DDP_STATUS_DDP_SHIFT))
skb_ulp_mode(skb) |= ULP2_FLAG_DATA_DDPED;
- c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_ulp_pdulen(skb);
+ c3cn->rcv_nxt = ntohl(ddp_cpl.seq) + skb_rx_pdulen(skb);
__pskb_trim(skb, len);
__skb_queue_tail(&c3cn->receive_queue, skb);
cxgb3i_conn_pdu_ready(c3cn);
* Process an acknowledgment of WR completion. Advance snd_una and send the
* next batch of work requests from the write queue.
*/
+static void check_wr_invariants(struct s3_conn *c3cn)
+{
+ int pending = count_pending_wrs(c3cn);
+
+ if (unlikely(c3cn->wr_avail + pending != c3cn->wr_max))
+ cxgb3i_log_error("TID %u: credit imbalance: avail %u, "
+ "pending %u, total should be %u\n",
+ c3cn->tid, c3cn->wr_avail, pending,
+ c3cn->wr_max);
+}
+
static void process_wr_ack(struct s3_conn *c3cn, struct sk_buff *skb)
{
struct cpl_wr_ack *hdr = cplhdr(skb);
unsigned int credits = ntohs(hdr->credits);
u32 snd_una = ntohl(hdr->snd_una);
+ c3cn_tx_debug("%u WR credits, avail %u, unack %u, TID %u, state %u.\n",
+ credits, c3cn->wr_avail, c3cn->wr_unacked,
+ c3cn->tid, c3cn->state);
+
c3cn->wr_avail += credits;
if (c3cn->wr_unacked > c3cn->wr_max - c3cn->wr_avail)
c3cn->wr_unacked = c3cn->wr_max - c3cn->wr_avail;
break;
}
if (unlikely(credits < p->csum)) {
+ struct tx_data_wr *w = cplhdr(p);
+ cxgb3i_log_error("TID %u got %u WR credits need %u, "
+ "len %u, main body %u, frags %u, "
+ "seq # %u, ACK una %u, ACK nxt %u, "
+ "WR_AVAIL %u, WRs pending %u\n",
+ c3cn->tid, credits, p->csum, p->len,
+ p->len - p->data_len,
+ skb_shinfo(p)->nr_frags,
+ ntohl(w->sndseq), snd_una,
+ ntohl(hdr->snd_nxt), c3cn->wr_avail,
+ count_pending_wrs(c3cn) - credits);
p->csum -= credits;
break;
} else {
}
}
- if (unlikely(before(snd_una, c3cn->snd_una)))
+ check_wr_invariants(c3cn);
+
+ if (unlikely(before(snd_una, c3cn->snd_una))) {
+ cxgb3i_log_error("TID %u, unexpected sequence # %u in WR_ACK "
+ "snd_una %u\n",
+ c3cn->tid, snd_una, c3cn->snd_una);
goto out_free;
+ }
if (c3cn->snd_una != snd_una) {
c3cn->snd_una = snd_una;
dst_confirm(c3cn->dst_cache);
}
- if (skb_queue_len(&c3cn->write_queue) && c3cn_push_tx_frames(c3cn, 0))
+ if (skb_queue_len(&c3cn->write_queue)) {
+ if (c3cn_push_tx_frames(c3cn, 0))
+ cxgb3i_conn_tx_open(c3cn);
+ } else
cxgb3i_conn_tx_open(c3cn);
out_free:
__kfree_skb(skb);
struct dst_entry *dst)
{
BUG_ON(c3cn->cdev != cdev);
- c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs;
+ c3cn->wr_max = c3cn->wr_avail = T3C_DATA(cdev)->max_wrs - 1;
c3cn->wr_unacked = 0;
c3cn->mss_idx = select_mss(c3cn, dst_mtu(dst));
goto out_err;
}
- err = -EPIPE;
if (c3cn->err) {
c3cn_tx_debug("c3cn 0x%p, err %d.\n", c3cn, c3cn->err);
+ err = -EPIPE;
+ goto out_err;
+ }
+
+ if (c3cn->write_seq - c3cn->snd_una >= cxgb3_snd_win) {
+ c3cn_tx_debug("c3cn 0x%p, snd %u - %u > %u.\n",
+ c3cn, c3cn->write_seq, c3cn->snd_una,
+ cxgb3_snd_win);
+ err = -EAGAIN;
goto out_err;
}
* @flag: see C3CB_FLAG_* below
* @ulp_mode: ULP mode/submode of sk_buff
* @seq: tcp sequence number
- * @ddigest: pdu data digest
- * @pdulen: recovered pdu length
- * @wr_data: scratch area for tx wr
*/
+struct cxgb3_skb_rx_cb {
+ __u32 ddigest; /* data digest */
+ __u32 pdulen; /* recovered pdu length */
+};
+
+struct cxgb3_skb_tx_cb {
+ struct sk_buff *wr_next; /* next wr */
+};
+
struct cxgb3_skb_cb {
__u8 flags;
__u8 ulp_mode;
__u32 seq;
- __u32 ddigest;
- __u32 pdulen;
- struct sk_buff *wr_data;
+ union {
+ struct cxgb3_skb_rx_cb rx;
+ struct cxgb3_skb_tx_cb tx;
+ };
};
#define CXGB3_SKB_CB(skb) ((struct cxgb3_skb_cb *)&((skb)->cb[0]))
-
+#define skb_flags(skb) (CXGB3_SKB_CB(skb)->flags)
#define skb_ulp_mode(skb) (CXGB3_SKB_CB(skb)->ulp_mode)
-#define skb_ulp_ddigest(skb) (CXGB3_SKB_CB(skb)->ddigest)
-#define skb_ulp_pdulen(skb) (CXGB3_SKB_CB(skb)->pdulen)
-#define skb_wr_data(skb) (CXGB3_SKB_CB(skb)->wr_data)
+#define skb_tcp_seq(skb) (CXGB3_SKB_CB(skb)->seq)
+#define skb_rx_ddigest(skb) (CXGB3_SKB_CB(skb)->rx.ddigest)
+#define skb_rx_pdulen(skb) (CXGB3_SKB_CB(skb)->rx.pdulen)
+#define skb_tx_wr_next(skb) (CXGB3_SKB_CB(skb)->tx.wr_next)
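
The rx/tx union exists because skb->cb is a fixed 48-byte scratch area shared by every layer that touches the skb. An illustrative compile-time guard (not part of this patch) makes that constraint explicit:

/* Illustrative only: fail the build if the cb struct outgrows skb->cb[]. */
static inline void cxgb3_skb_cb_size_check(void)
{
	BUILD_BUG_ON(sizeof(struct cxgb3_skb_cb) >
		     sizeof(((struct sk_buff *)0)->cb));
}
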
enum c3cb_flags {
C3CB_FLAG_NEED_HDR = 1 << 0, /* packet needs a TX_DATA_WR header */
/* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */
#define TX_HEADER_LEN \
(sizeof(struct tx_data_wr) + sizeof(struct sge_opaque_hdr))
+#define SKB_TX_HEADROOM SKB_MAX_HEAD(TX_HEADER_LEN)
/*
* get and set private ip for iscsi traffic
#define cxgb3i_tx_debug(fmt...)
#endif
+/* always allocate room for AHS */
+#define SKB_TX_PDU_HEADER_LEN \
+ (sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE)
+static unsigned int skb_extra_headroom;
static struct page *pad_page;
/*
void cxgb3i_conn_cleanup_task(struct iscsi_task *task)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = task->dd_data +
+ sizeof(struct iscsi_tcp_task);
/* never reached the xmit task callout */
- if (tcp_task->dd_data)
- kfree_skb(tcp_task->dd_data);
- tcp_task->dd_data = NULL;
+ if (tdata->skb)
+ __kfree_skb(tdata->skb);
+ memset(tdata, 0, sizeof(struct cxgb3i_task_data));
/* MNC - Do we need a check in case this is called but
* cxgb3i_conn_alloc_pdu has never been called on the task */
iscsi_tcp_cleanup_task(task);
}
-/*
- * We do not support ahs yet
- */
+static int sgl_seek_offset(struct scatterlist *sgl, unsigned int sgcnt,
+ unsigned int offset, unsigned int *off,
+ struct scatterlist **sgp)
+{
+ int i;
+ struct scatterlist *sg;
+
+ for_each_sg(sgl, sg, sgcnt, i) {
+ if (offset < sg->length) {
+ *off = offset;
+ *sgp = sg;
+ return 0;
+ }
+ offset -= sg->length;
+ }
+ return -EFAULT;
+}
+
+static int sgl_read_to_frags(struct scatterlist *sg, unsigned int sgoffset,
+ unsigned int dlen, skb_frag_t *frags,
+ int frag_max)
+{
+ unsigned int datalen = dlen;
+ unsigned int sglen = sg->length - sgoffset;
+ struct page *page = sg_page(sg);
+ int i;
+
+ i = 0;
+ do {
+ unsigned int copy;
+
+ if (!sglen) {
+ sg = sg_next(sg);
+ if (!sg) {
+ cxgb3i_log_error("%s, sg NULL, len %u/%u.\n",
+ __func__, datalen, dlen);
+ return -EINVAL;
+ }
+ sgoffset = 0;
+ sglen = sg->length;
+ page = sg_page(sg);
+ }
+ copy = min(datalen, sglen);
+ if (i && page == frags[i - 1].page &&
+ sgoffset + sg->offset ==
+ frags[i - 1].page_offset + frags[i - 1].size) {
+ frags[i - 1].size += copy;
+ } else {
+ if (i >= frag_max) {
+ cxgb3i_log_error("%s, too many pages %u, "
+ "dlen %u.\n", __func__,
+ frag_max, dlen);
+ return -EINVAL;
+ }
+
+ frags[i].page = page;
+ frags[i].page_offset = sg->offset + sgoffset;
+ frags[i].size = copy;
+ i++;
+ }
+ datalen -= copy;
+ sgoffset += copy;
+ sglen -= copy;
+ } while (datalen);
+
+ return i;
+}
+
int cxgb3i_conn_alloc_pdu(struct iscsi_task *task, u8 opcode)
{
+ struct iscsi_conn *conn = task->conn;
struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb;
+ struct cxgb3i_task_data *tdata = task->dd_data + sizeof(*tcp_task);
+ struct scsi_cmnd *sc = task->sc;
+ int headroom = SKB_TX_PDU_HEADER_LEN;
+ tcp_task->dd_data = tdata;
task->hdr = NULL;
- /* always allocate rooms for AHS */
- skb = alloc_skb(sizeof(struct iscsi_hdr) + ISCSI_MAX_AHS_SIZE +
- TX_HEADER_LEN, GFP_ATOMIC);
- if (!skb)
+
+ /* write command, need to send data pdus */
+ if (skb_extra_headroom && (opcode == ISCSI_OP_SCSI_DATA_OUT ||
+ (opcode == ISCSI_OP_SCSI_CMD &&
+ (scsi_bidi_cmnd(sc) || sc->sc_data_direction == DMA_TO_DEVICE))))
+ headroom += min(skb_extra_headroom, conn->max_xmit_dlength);
+
+ tdata->skb = alloc_skb(TX_HEADER_LEN + headroom, GFP_ATOMIC);
+ if (!tdata->skb)
return -ENOMEM;
+ skb_reserve(tdata->skb, TX_HEADER_LEN);
cxgb3i_tx_debug("task 0x%p, opcode 0x%x, skb 0x%p.\n",
- task, opcode, skb);
+ task, opcode, tdata->skb);
- tcp_task->dd_data = skb;
- skb_reserve(skb, TX_HEADER_LEN);
- task->hdr = (struct iscsi_hdr *)skb->data;
- task->hdr_max = sizeof(struct iscsi_hdr);
+ task->hdr = (struct iscsi_hdr *)tdata->skb->data;
+ task->hdr_max = SKB_TX_PDU_HEADER_LEN;
/* data_out uses scsi_cmd's itt */
if (opcode != ISCSI_OP_SCSI_DATA_OUT)
int cxgb3i_conn_init_pdu(struct iscsi_task *task, unsigned int offset,
unsigned int count)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb = tcp_task->dd_data;
struct iscsi_conn *conn = task->conn;
- struct page *pg;
+ struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = tcp_task->dd_data;
+ struct sk_buff *skb = tdata->skb;
unsigned int datalen = count;
int i, padlen = iscsi_padding(count);
- skb_frag_t *frag;
+ struct page *pg;
cxgb3i_tx_debug("task 0x%p,0x%p, offset %u, count %u, skb 0x%p.\n",
task, task->sc, offset, count, skb);
return 0;
if (task->sc) {
- struct scatterlist *sg;
- struct scsi_data_buffer *sdb;
- unsigned int sgoffset = offset;
- struct page *sgpg;
- unsigned int sglen;
-
- sdb = scsi_out(task->sc);
- sg = sdb->table.sgl;
-
- for_each_sg(sdb->table.sgl, sg, sdb->table.nents, i) {
- cxgb3i_tx_debug("sg %d, page 0x%p, len %u offset %u\n",
- i, sg_page(sg), sg->length, sg->offset);
-
- if (sgoffset < sg->length)
- break;
- sgoffset -= sg->length;
+ struct scsi_data_buffer *sdb = scsi_out(task->sc);
+ struct scatterlist *sg = NULL;
+ int err;
+
+ tdata->offset = offset;
+ tdata->count = count;
+ err = sgl_seek_offset(sdb->table.sgl, sdb->table.nents,
+ tdata->offset, &tdata->sgoffset, &sg);
+ if (err < 0) {
+ cxgb3i_log_warn("tpdu, sgl %u, bad offset %u/%u.\n",
+ sdb->table.nents, tdata->offset,
+ sdb->length);
+ return err;
}
- sgpg = sg_page(sg);
- sglen = sg->length - sgoffset;
-
- do {
- int j = skb_shinfo(skb)->nr_frags;
- unsigned int copy;
-
- if (!sglen) {
- sg = sg_next(sg);
- sgpg = sg_page(sg);
- sgoffset = 0;
- sglen = sg->length;
- ++i;
+ err = sgl_read_to_frags(sg, tdata->sgoffset, tdata->count,
+ tdata->frags, MAX_PDU_FRAGS);
+ if (err < 0) {
+ cxgb3i_log_warn("tpdu, sgl %u, bad offset %u + %u.\n",
+ sdb->table.nents, tdata->offset,
+ tdata->count);
+ return err;
+ }
+ tdata->nr_frags = err;
+
+ if (tdata->nr_frags > MAX_SKB_FRAGS ||
+ (padlen && tdata->nr_frags == MAX_SKB_FRAGS)) {
+ char *dst = skb->data + task->hdr_len;
+ skb_frag_t *frag = tdata->frags;
+
+ /* too many frags to attach: copy the payload into the skb's headroom */
+ for (i = 0; i < tdata->nr_frags; i++, frag++) {
+ char *src = kmap_atomic(frag->page,
+ KM_SOFTIRQ0);
+
+ memcpy(dst, src+frag->page_offset, frag->size);
+ dst += frag->size;
+ kunmap_atomic(src, KM_SOFTIRQ0);
}
- copy = min(sglen, datalen);
- if (j && skb_can_coalesce(skb, j, sgpg,
- sg->offset + sgoffset)) {
- skb_shinfo(skb)->frags[j - 1].size += copy;
- } else {
- get_page(sgpg);
- skb_fill_page_desc(skb, j, sgpg,
- sg->offset + sgoffset, copy);
+ if (padlen) {
+ memset(dst, 0, padlen);
+ padlen = 0;
}
- sgoffset += copy;
- sglen -= copy;
- datalen -= copy;
- } while (datalen);
+ skb_put(skb, count + padlen);
+ } else {
+ /* data fits in the frags array */
+ for (i = 0; i < tdata->nr_frags; i++)
+ get_page(tdata->frags[i].page);
+
+ memcpy(skb_shinfo(skb)->frags, tdata->frags,
+ sizeof(skb_frag_t) * tdata->nr_frags);
+ skb_shinfo(skb)->nr_frags = tdata->nr_frags;
+ skb->len += count;
+ skb->data_len += count;
+ skb->truesize += count;
+ }
+
} else {
pg = virt_to_page(task->data);
- while (datalen) {
- i = skb_shinfo(skb)->nr_frags;
- frag = &skb_shinfo(skb)->frags[i];
-
- get_page(pg);
- frag->page = pg;
- frag->page_offset = 0;
- frag->size = min((unsigned int)PAGE_SIZE, datalen);
-
- skb_shinfo(skb)->nr_frags++;
- datalen -= frag->size;
- pg++;
- }
+ get_page(pg);
+ skb_fill_page_desc(skb, 0, pg, offset_in_page(task->data),
+ count);
+ skb->len += count;
+ skb->data_len += count;
+ skb->truesize += count;
}
if (padlen) {
i = skb_shinfo(skb)->nr_frags;
- frag = &skb_shinfo(skb)->frags[i];
- frag->page = pad_page;
- frag->page_offset = 0;
- frag->size = padlen;
- skb_shinfo(skb)->nr_frags++;
+ get_page(pad_page);
+ skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, pad_page, 0,
+ padlen);
+
+ skb->data_len += padlen;
+ skb->truesize += padlen;
+ skb->len += padlen;
}
- datalen = count + padlen;
- skb->data_len += datalen;
- skb->truesize += datalen;
- skb->len += datalen;
return 0;
}
int cxgb3i_conn_xmit_pdu(struct iscsi_task *task)
{
- struct iscsi_tcp_task *tcp_task = task->dd_data;
- struct sk_buff *skb = tcp_task->dd_data;
struct iscsi_tcp_conn *tcp_conn = task->conn->dd_data;
struct cxgb3i_conn *cconn = tcp_conn->dd_data;
+ struct iscsi_tcp_task *tcp_task = task->dd_data;
+ struct cxgb3i_task_data *tdata = tcp_task->dd_data;
+ struct sk_buff *skb = tdata->skb;
unsigned int datalen;
int err;
return 0;
datalen = skb->data_len;
- tcp_task->dd_data = NULL;
+ tdata->skb = NULL;
err = cxgb3i_c3cn_send_pdus(cconn->cep->c3cn, skb);
- cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n",
- task, skb, skb->len, skb->data_len, err);
if (err > 0) {
int pdulen = err;
+ cxgb3i_tx_debug("task 0x%p, skb 0x%p, len %u/%u, rv %d.\n",
+ task, skb, skb->len, skb->data_len, err);
+
if (task->conn->hdrdgst_en)
pdulen += ISCSI_DIGEST_SIZE;
if (datalen && task->conn->datadgst_en)
return err;
}
/* reset skb to send when we are called again */
- tcp_task->dd_data = skb;
+ tdata->skb = skb;
return -EAGAIN;
}
int cxgb3i_pdu_init(void)
{
+ if (SKB_TX_HEADROOM > (512 * MAX_SKB_FRAGS))
+ skb_extra_headroom = SKB_TX_HEADROOM;
pad_page = alloc_page(GFP_KERNEL);
if (!pad_page)
return -ENOMEM;
skb = skb_peek(&c3cn->receive_queue);
while (!err && skb) {
__skb_unlink(skb, &c3cn->receive_queue);
- read += skb_ulp_pdulen(skb);
+ read += skb_rx_pdulen(skb);
+ cxgb3i_rx_debug("conn 0x%p, cn 0x%p, rx skb 0x%p, pdulen %u.\n",
+ conn, c3cn, skb, skb_rx_pdulen(skb));
err = cxgb3i_conn_read_pdu_skb(conn, skb);
__kfree_skb(skb);
skb = skb_peek(&c3cn->receive_queue);
cxgb3i_c3cn_rx_credits(c3cn, read);
}
conn->rxdata_octets += read;
+
+ if (err) {
+ cxgb3i_log_info("conn 0x%p rx failed err %d.\n", conn, err);
+ iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
+ }
}
void cxgb3i_conn_tx_open(struct s3_conn *c3cn)
#define ULP2_FLAG_DCRC_ERROR 0x20
#define ULP2_FLAG_PAD_ERROR 0x40
-void cxgb3i_conn_closing(struct s3_conn *);
+void cxgb3i_conn_closing(struct s3_conn *c3cn);
void cxgb3i_conn_pdu_ready(struct s3_conn *c3cn);
void cxgb3i_conn_tx_open(struct s3_conn *c3cn);
#endif
{ PCI_VDEVICE(TTI, 0x3530), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x3560), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4322), (kernel_ulong_t)&hptiop_itl_ops },
+ { PCI_VDEVICE(TTI, 0x4321), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4210), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4211), (kernel_ulong_t)&hptiop_itl_ops },
{ PCI_VDEVICE(TTI, 0x4310), (kernel_ulong_t)&hptiop_itl_ops },
action = ACTION_FAIL;
break;
case ABORTED_COMMAND:
+ action = ACTION_FAIL;
if (sshdr.asc == 0x10) { /* DIF */
description = "Target Data Integrity Failure";
- action = ACTION_FAIL;
error = -EILSEQ;
- } else
- action = ACTION_RETRY;
+ }
break;
case NOT_READY:
/* If the device is in the process of becoming
static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *);
static void sd_print_result(struct scsi_disk *, int);
+static DEFINE_SPINLOCK(sd_index_lock);
static DEFINE_IDA(sd_index_ida);
/* This semaphore is used to mediate the 0->1 reference get in the
if (!ida_pre_get(&sd_index_ida, GFP_KERNEL))
goto out_put;
+ spin_lock(&sd_index_lock);
error = ida_get_new(&sd_index_ida, &index);
+ spin_unlock(&sd_index_lock);
} while (error == -EAGAIN);
if (error)
return 0;
out_free_index:
+ spin_lock(&sd_index_lock);
ida_remove(&sd_index_ida, index);
+ spin_unlock(&sd_index_lock);
out_put:
put_disk(gd);
out_free:
struct scsi_disk *sdkp = to_scsi_disk(dev);
struct gendisk *disk = sdkp->disk;
+ spin_lock(&sd_index_lock);
ida_remove(&sd_index_ida, sdkp->index);
+ spin_unlock(&sd_index_lock);
disk->private_data = NULL;
put_disk(disk);
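
Taken together, these hunks serialize the classic ida idiom, which is not safe against concurrent probes on its own. Assembled from the lines above, the allocation side now reads:

	do {
		if (!ida_pre_get(&sd_index_ida, GFP_KERNEL))
			goto out_put;
		spin_lock(&sd_index_lock);
		error = ida_get_new(&sd_index_ida, &index);
		spin_unlock(&sd_index_lock);
	} while (error == -EAGAIN);
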
# define SCSPTR3 0xffed0024 /* 16 bit SCIF */
# define SCSPTR4 0xffee0024 /* 16 bit SCIF */
# define SCSPTR5 0xffef0024 /* 16 bit SCIF */
-# define SCIF_OPER 0x0001 /* Overrun error bit */
+# define SCIF_ORER 0x0001 /* Overrun error bit */
# define SCSCR_INIT(port) 0x3a /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
#elif defined(CONFIG_CPU_SUBTYPE_SH7201) || \
defined(CONFIG_CPU_SUBTYPE_SH7203) || \
if (scan_timer.function != NULL)
del_timer(&scan_timer);
- if (keypad_enabled)
- misc_deregister(&keypad_dev);
+ if (pprt != NULL) {
+ if (keypad_enabled)
+ misc_deregister(&keypad_dev);
+
+ if (lcd_enabled) {
+ panel_lcd_print("\x0cLCD driver " PANEL_VERSION
+ "\nunloaded.\x1b[Lc\x1b[Lb\x1b[L-");
+ misc_deregister(&lcd_dev);
+ }
- if (lcd_enabled) {
- panel_lcd_print("\x0cLCD driver " PANEL_VERSION
- "\nunloaded.\x1b[Lc\x1b[Lb\x1b[L-");
- misc_deregister(&lcd_dev);
+ /* TODO: free all input signals */
+ parport_release(pprt);
+ parport_unregister_device(pprt);
}
-
- /* TODO: free all input signals */
-
- parport_release(pprt);
- parport_unregister_device(pprt);
parport_unregister_driver(&panel_driver);
}
config RTL8187SE
tristate "RealTek RTL8187SE Wireless LAN NIC driver"
depends on PCI
+ depends on WIRELESS_EXT && COMPAT_NET_DEV_OPS
default n
---help---
void ieee80211_crypto_deinit(void)
{
struct list_head *ptr, *n;
+ struct ieee80211_crypto_alg *alg = NULL;
if (hcrypt == NULL)
return;
- for (ptr = hcrypt->algs.next, n = ptr->next; ptr != &hcrypt->algs;
- ptr = n, n = ptr->next) {
- struct ieee80211_crypto_alg *alg =
- (struct ieee80211_crypto_alg *) ptr;
- list_del(ptr);
- printk(KERN_DEBUG "ieee80211_crypt: unregistered algorithm "
- "'%s' (deinit)\n", alg->ops->name);
- kfree(alg);
+ list_for_each_safe(ptr, n, &hcrypt->algs) {
+ alg = list_entry(ptr, struct ieee80211_crypto_alg, list);
+ if (alg) {
+ list_del(ptr);
+ printk(KERN_DEBUG
+ "ieee80211_crypt: unregistered algorithm '%s' (deinit)\n",
+ alg->ops->name);
+ kfree(alg);
+ }
}
-
kfree(hcrypt);
}
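
The switch to list_for_each_safe() matters because the loop body frees the node the cursor points at; the _safe variant caches the successor before the body runs. Reduced to its essentials, using the same types as above:

	struct list_head *ptr, *n;

	list_for_each_safe(ptr, n, &hcrypt->algs) {	/* n already holds ptr->next */
		list_del(ptr);
		kfree(list_entry(ptr, struct ieee80211_crypto_alg, list));
	}
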
{
pci_unregister_driver (&rtl8180_pci_driver);
rtl8180_proc_module_remove();
- ieee80211_crypto_deinit();
ieee80211_crypto_tkip_exit();
ieee80211_crypto_ccmp_exit();
ieee80211_crypto_wep_exit();
+ ieee80211_crypto_deinit();
DMESG("Exiting");
}
struct usb_device *udev = interface_to_usbdev(intf);
struct wbsoft_priv *priv;
struct ieee80211_hw *dev;
- int err;
+ int nr, err;
usb_get_dev(udev);
// 20060630.2 Check whether the device has already been opened
- err = usb_control_msg(udev, usb_rcvctrlpipe( udev, 0 ),
- 0x01, USB_TYPE_VENDOR|USB_RECIP_DEVICE|USB_DIR_IN,
- 0x0, 0x400, <mp, 4, HZ*100 );
- if (err)
+ nr = usb_control_msg(udev, usb_rcvctrlpipe( udev, 0 ),
+ 0x01, USB_TYPE_VENDOR|USB_RECIP_DEVICE|USB_DIR_IN,
+ 0x0, 0x400, <mp, 4, HZ*100 );
+ if (nr < 0) {
+ err = nr;
goto error;
+ }
ltmp = cpu_to_le32(ltmp);
if (ltmp) { // Is already initialized?
}
dev = ieee80211_alloc_hw(sizeof(*priv), &wbsoft_ops);
- if (!dev)
+ if (!dev) {
+ err = -ENOMEM;
goto error;
+ }
priv = dev->priv;
}
dev->extra_tx_headroom = 12; /* FIXME */
- dev->flags = 0;
+ dev->flags = IEEE80211_HW_SIGNAL_UNSPEC;
+ dev->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION);
dev->channel_change_time = 1000;
+ dev->max_signal = 100;
dev->queues = 1;
dev->wiphy->bands[IEEE80211_BAND_2GHZ] = &wbsoft_band_2GHz;
{ USB_DEVICE(0x0572, 0x1324), /* Conexant USB MODEM RD02-D400 */
.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
},
+ { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
+ },
+ { USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */
+ .driver_info = NO_UNION_NORMAL, /* union descriptor misplaced on
+ data interface instead of
+ communications interface.
+ Maybe we should define a new
+ quirk for this. */
+ },
/* control interfaces with various AT-command sets */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
if (result <= 0 && result != -ETIMEDOUT)
continue;
if (result > 1 && ((u8 *)buf)[1] != type) {
- result = -EPROTO;
+ result = -ENODATA;
continue;
}
break;
USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
(USB_DT_STRING << 8) + index, langid, buf, size,
USB_CTRL_GET_TIMEOUT);
- if (!(result == 0 || result == -EPIPE))
- break;
+ if (result == 0 || result == -EPIPE)
+ continue;
+ if (result > 1 && ((u8 *) buf)[1] != USB_DT_STRING) {
+ result = -ENODATA;
+ continue;
+ }
+ break;
}
return result;
}
boolean "OMAP USB Device Controller"
depends on ARCH_OMAP
select ISP1301_OMAP if MACH_OMAP_H2 || MACH_OMAP_H3 || MACH_OMAP_H4_OTG
+ select USB_OTG_UTILS if ARCH_OMAP
help
Many Texas Instruments OMAP processors have flexible full
speed USB device controllers, with support for up to 30
f->hs_descriptors = usb_copy_descriptors(hs_function);
obex->hs.obex_in = usb_find_endpoint(hs_function,
- f->descriptors, &obex_hs_ep_in_desc);
+ f->hs_descriptors, &obex_hs_ep_in_desc);
obex->hs.obex_out = usb_find_endpoint(hs_function,
- f->descriptors, &obex_hs_ep_out_desc);
+ f->hs_descriptors, &obex_hs_ep_out_desc);
}
/* Avoid letting this gadget enumerate until the userspace
mod_data.protocol_type = USB_SC_SCSI;
mod_data.protocol_name = "Transparent SCSI";
- if (gadget_is_sh(fsg->gadget))
+ /* Some peripheral controllers are known not to be able to
+ * halt bulk endpoints correctly. If one of them is present,
+ * disable stalls.
+ */
+ if (gadget_is_sh(fsg->gadget) || gadget_is_at91(fsg->gadget))
mod_data.can_stall = 0;
if (mod_data.release == 0xffff) { // Parameter wasn't set
}
if (zlt)
tmp |= EP_QUEUE_HEAD_ZLT_SEL;
+
p_QH->max_pkt_length = cpu_to_le32(tmp);
+ p_QH->next_dtd_ptr = 1;	/* terminate (T) bit set: no valid dTD yet */
+ p_QH->size_ioc_int_sts = 0;
return;
}
* periodic_size can shrink by USBCMD update if hcc_params allows.
*/
ehci->periodic_size = DEFAULT_I_TDPS;
+ INIT_LIST_HEAD(&ehci->cached_itd_list);
if ((retval = ehci_mem_init(ehci, GFP_KERNEL)) < 0)
return retval;
ehci->reclaim = NULL;
ehci->next_uframe = -1;
+ ehci->clock_frame = -1;
/*
* dedicate a qh for the async ring head, since we couldn't unlink
static void ehci_mem_cleanup (struct ehci_hcd *ehci)
{
+ free_cached_itd_list(ehci);
if (ehci->async)
qh_put (ehci->async);
ehci->async = NULL;
is_in = (stream->bEndpointAddress & USB_DIR_IN) ? 0x10 : 0;
stream->bEndpointAddress &= 0x0f;
- stream->ep->hcpriv = NULL;
+ if (stream->ep)
+ stream->ep->hcpriv = NULL;
if (stream->rescheduled) {
ehci_info (ehci, "ep%d%s-iso rescheduled "
(stream->bEndpointAddress & USB_DIR_IN) ? "in" : "out");
}
iso_stream_put (ehci, stream);
- /* OK to recycle this ITD now that its completion callback ran. */
+
done:
usb_put_urb(urb);
itd->urb = NULL;
- itd->stream = NULL;
- list_move(&itd->itd_list, &stream->free_list);
- iso_stream_put(ehci, stream);
-
+ if (ehci->clock_frame != itd->frame || itd->index[7] != -1) {
+ /* OK to recycle this ITD now. */
+ itd->stream = NULL;
+ list_move(&itd->itd_list, &stream->free_list);
+ iso_stream_put(ehci, stream);
+ } else {
+ /* HW might remember this ITD, so we can't recycle it yet.
+ * Move it to a safe place until a new frame starts.
+ */
+ list_move(&itd->itd_list, &ehci->cached_itd_list);
+ if (stream->refcount == 2) {
+ /* If iso_stream_put() were called here, stream
+ * would be freed. Instead, just prevent reuse.
+ */
+ stream->ep->hcpriv = NULL;
+ stream->ep = NULL;
+ }
+ }
return retval;
}
/*-------------------------------------------------------------------------*/
+static void free_cached_itd_list(struct ehci_hcd *ehci)
+{
+ struct ehci_itd *itd, *n;
+
+ list_for_each_entry_safe(itd, n, &ehci->cached_itd_list, itd_list) {
+ struct ehci_iso_stream *stream = itd->stream;
+ itd->stream = NULL;
+ list_move(&itd->itd_list, &stream->free_list);
+ iso_stream_put(ehci, stream);
+ }
+}
+
+/*-------------------------------------------------------------------------*/
+
static void
scan_periodic (struct ehci_hcd *ehci)
{
* Touches as few pages as possible: cache-friendly.
*/
now_uframe = ehci->next_uframe;
- if (HC_IS_RUNNING (ehci_to_hcd(ehci)->state))
+ if (HC_IS_RUNNING(ehci_to_hcd(ehci)->state)) {
clock = ehci_readl(ehci, &ehci->regs->frame_index);
- else
+ clock_frame = (clock >> 3) % ehci->periodic_size;
+ } else {
clock = now_uframe + mod - 1;
+ clock_frame = -1;
+ }
+ if (ehci->clock_frame != clock_frame) {
+ free_cached_itd_list(ehci);
+ ehci->clock_frame = clock_frame;
+ }
clock %= mod;
clock_frame = clock >> 3;
/* rescan the rest of this frame, then ... */
clock = now;
clock_frame = clock >> 3;
+ if (ehci->clock_frame != clock_frame) {
+ free_cached_itd_list(ehci);
+ ehci->clock_frame = clock_frame;
+ }
} else {
now_uframe++;
now_uframe %= mod;
int next_uframe; /* scan periodic, start here */
unsigned periodic_sched; /* periodic activity count */
+ /* list of itds completed while clock_frame was still active */
+ struct list_head cached_itd_list;
+ unsigned clock_frame;
+
/* per root hub port */
unsigned long reset_done [EHCI_MAX_ROOT_PORTS];
}
}
+static void free_cached_itd_list(struct ehci_hcd *ehci);
+
/*-------------------------------------------------------------------------*/
#include <linux/usb/ehci_def.h>
u32 revision;
musb->mregs += DAVINCI_BASE_OFFSET;
-#if 0
- /* REVISIT there's something odd about clocking, this
- * didn't appear do the job ...
- */
- musb->clock = clk_get(pDevice, "usb");
- if (IS_ERR(musb->clock))
- return PTR_ERR(musb->clock);
- status = clk_enable(musb->clock);
- if (status < 0)
- return -ENODEV;
-#endif
+ clk_enable(musb->clock);
/* returns zero if e.g. not clocked */
revision = musb_readl(tibase, DAVINCI_USB_VERSION_REG);
}
phy_off();
+
+ clk_disable(musb->clock);
+
return 0;
}
unsigned musb_debug;
-module_param(musb_debug, uint, S_IRUGO | S_IWUSR);
+module_param_named(debug, musb_debug, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(debug, "Debug message level. Default = 0");
#define DRIVER_AUTHOR "Mentor Graphics, Texas Instruments, Nokia"
#ifdef CONFIG_USB_MUSB_HDRC_HCD
case OTG_STATE_A_HOST:
case OTG_STATE_A_SUSPEND:
+ usb_hcd_resume_root_hub(musb_to_hcd(musb));
musb_root_disconnect(musb);
if (musb->a_wait_bcon != 0)
musb_platform_try_idle(musb, jiffies
#ifdef CONFIG_SYSFS
device_remove_file(musb->controller, &dev_attr_mode);
device_remove_file(musb->controller, &dev_attr_vbus);
-#ifdef CONFIG_USB_MUSB_OTG
+#ifdef CONFIG_USB_GADGET_MUSB_HDRC
device_remove_file(musb->controller, &dev_attr_srp);
#endif
#endif
#ifdef CONFIG_SYSFS
device_remove_file(musb->controller, &dev_attr_mode);
device_remove_file(musb->controller, &dev_attr_vbus);
-#ifdef CONFIG_USB_MUSB_OTG
+#ifdef CONFIG_USB_GADGET_MUSB_HDRC
device_remove_file(musb->controller, &dev_attr_srp);
#endif
#endif
return platform_driver_probe(&musb_driver, musb_probe);
}
-/* make us init after usbcore and before usb
- * gadget and host-side drivers start to register
+/* make us init after usbcore and i2c (transceivers, regulators, etc)
+ * and before usb gadget and host-side drivers start to register
*/
-subsys_initcall(musb_init);
+fs_initcall(musb_init);
static void __exit musb_cleanup(void)
{
struct usb_request *request = &req->request;
struct musb_ep *musb_ep = &musb->endpoints[epnum].ep_out;
void __iomem *epio = musb->endpoints[epnum].regs;
- u16 fifo_count = 0;
+ unsigned fifo_count = 0;
u16 len = musb_ep->packet_sz;
csr = musb_readw(epio, MUSB_RXCSR);
len, fifo_count,
musb_ep->packet_sz);
- fifo_count = min(len, fifo_count);
+ fifo_count = min_t(unsigned, len, fifo_count);
#ifdef CONFIG_USB_TUSB_OMAP_DMA
if (tusb_dma_omap() && musb_ep->dma) {
static struct musb_qh *
musb_giveback(struct musb_qh *qh, struct urb *urb, int status)
{
- int is_in;
struct musb_hw_ep *ep = qh->hw_ep;
struct musb *musb = ep->musb;
+ int is_in = usb_pipein(urb->pipe);
int ready = qh->is_ready;
- if (ep->is_shared_fifo)
- is_in = 1;
- else
- is_in = usb_pipein(urb->pipe);
-
/* save toggle eagerly, for paranoia */
switch (qh->type) {
case USB_ENDPOINT_XFER_BULK:
else
qh = musb_giveback(qh, urb, urb->status);
- if (qh && qh->is_ready && !list_empty(&qh->hep->urb_list)) {
+ if (qh != NULL && qh->is_ready) {
DBG(4, "... next ep%d %cX urb %p\n",
hw_ep->epnum, is_in ? 'R' : 'T',
next_urb(qh));
switch (musb->ep0_stage) {
case MUSB_EP0_IN:
fifo_dest = urb->transfer_buffer + urb->actual_length;
- fifo_count = min(len, ((u16) (urb->transfer_buffer_length
- - urb->actual_length)));
+ fifo_count = min_t(size_t, len, urb->transfer_buffer_length -
+ urb->actual_length);
if (fifo_count < len)
urb->status = -EOVERFLOW;
}
/* FALLTHROUGH */
case MUSB_EP0_OUT:
- fifo_count = min(qh->maxpacket, ((u16)
- (urb->transfer_buffer_length
- - urb->actual_length)));
-
+ fifo_count = min_t(size_t, qh->maxpacket,
+ urb->transfer_buffer_length -
+ urb->actual_length);
if (fifo_count) {
fifo_dest = (u8 *) (urb->transfer_buffer
+ urb->actual_length);
struct urb *urb;
struct musb_hw_ep *hw_ep = musb->endpoints + epnum;
void __iomem *epio = hw_ep->regs;
- struct musb_qh *qh = hw_ep->out_qh;
+ struct musb_qh *qh = hw_ep->is_shared_fifo ? hw_ep->in_qh
+ : hw_ep->out_qh;
u32 status = 0;
void __iomem *mbase = musb->mregs;
struct dma_channel *dma;
* packets before updating TXCSR ... other docs disagree ...
*/
/* PIO: start next packet in this URB */
- wLength = min(qh->maxpacket, (u16) wLength);
+ if (wLength > qh->maxpacket)
+ wLength = qh->maxpacket;
musb_write_fifo(hw_ep, wLength, buf);
qh->segsize = wLength;
}
qh->type_reg = type_reg;
- /* precompute rxinterval/txinterval register */
- interval = min((u8)16, epd->bInterval); /* log encoding */
+ /* Precompute RXINTERVAL/TXINTERVAL register */
switch (qh->type) {
case USB_ENDPOINT_XFER_INT:
- /* fullspeed uses linear encoding */
- if (USB_SPEED_FULL == urb->dev->speed) {
- interval = epd->bInterval;
- if (!interval)
- interval = 1;
+ /*
+ * Full/low speeds use the linear encoding,
+ * high speed uses the logarithmic encoding.
+ */
+ if (urb->dev->speed <= USB_SPEED_FULL) {
+ interval = max_t(u8, epd->bInterval, 1);
+ break;
}
/* FALLTHROUGH */
case USB_ENDPOINT_XFER_ISOC:
- /* iso always uses log encoding */
+ /* ISO always uses logarithmic encoding */
+ interval = min_t(u8, epd->bInterval, 16);
break;
default:
/* REVISIT we actually want to use NAK limits, hinting to the
goto done;
/* Any URB not actively programmed into endpoint hardware can be
- * immediately given back. Such an URB must be at the head of its
+ * immediately given back; that's any URB not at the head of an
* endpoint queue, unless someday we get real DMA queues. And even
- * then, it might not be known to the hardware...
+ * if it's at the head, it might not be known to the hardware...
*
* Otherwise abort current transfer, pending dma, etc.; urb->status
* has already been updated. This is a synchronous abort; it'd be
qh->is_ready = 0;
__musb_giveback(musb, urb, 0);
qh->is_ready = ready;
+
+ /* If nothing else (usually musb_giveback) is using it
+ * and its URB list has emptied, recycle this qh.
+ */
+ if (ready && list_empty(&qh->hep->urb_list)) {
+ qh->hep->hcpriv = NULL;
+ list_del(&qh->ring);
+ kfree(qh);
+ }
} else
ret = musb_cleanup_urb(urb, qh, urb->pipe & USB_DIR_IN);
done:
unsigned long flags;
struct musb *musb = hcd_to_musb(hcd);
u8 is_in = epnum & USB_DIR_IN;
- struct musb_qh *qh = hep->hcpriv;
- struct urb *urb, *tmp;
+ struct musb_qh *qh;
+ struct urb *urb;
struct list_head *sched;
- if (!qh)
- return;
-
spin_lock_irqsave(&musb->lock, flags);
+ qh = hep->hcpriv;
+ if (qh == NULL)
+ goto exit;
+
switch (qh->type) {
case USB_ENDPOINT_XFER_CONTROL:
sched = &musb->control;
/* cleanup */
musb_cleanup_urb(urb, qh, urb->pipe & USB_DIR_IN);
- } else
- urb = NULL;
- /* then just nuke all the others */
- list_for_each_entry_safe_from(urb, tmp, &hep->urb_list, urb_list)
- musb_giveback(qh, urb, -ESHUTDOWN);
+ /* Then nuke all the others ... and advance the
+ * queue on hw_ep (e.g. bulk ring) when we're done.
+ */
+ while (!list_empty(&hep->urb_list)) {
+ urb = next_urb(qh);
+ urb->status = -ESHUTDOWN;
+ musb_advance_schedule(musb, urb, qh->hw_ep, is_in);
+ }
+ } else {
+ /* Just empty the queue; the hardware is busy with
+ * other transfers, and since !qh->is_ready nothing
+ * will activate any of these as it advances.
+ */
+ while (!list_empty(&hep->urb_list))
+ __musb_giveback(musb, next_urb(qh), -ESHUTDOWN);
+ hep->hcpriv = NULL;
+ list_del(&qh->ring);
+ kfree(qh);
+ }
+exit:
spin_unlock_irqrestore(&musb->lock, flags);
}
/* Ericsson products */
#define ERICSSON_VENDOR_ID 0x0bdb
-#define ERICSSON_PRODUCT_F3507G 0x1900
+#define ERICSSON_PRODUCT_F3507G_1 0x1900
+#define ERICSSON_PRODUCT_F3507G_2 0x1902
+
+#define BENQ_VENDOR_ID 0x04a5
+#define BENQ_PRODUCT_H10 0x4068
static struct usb_device_id option_ids[] = {
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_MF626) },
{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_MF628) },
{ USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_CDMA_TECH) },
- { USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G) },
+ { USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G_1) },
+ { USB_DEVICE(ERICSSON_VENDOR_ID, ERICSSON_PRODUCT_F3507G_2) },
+ { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) },
+ { USB_DEVICE(0x1da5, 0x4515) }, /* BenQ H20 */
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
"Genesys Logic",
"USB to IDE Optical",
US_SC_DEVICE, US_PR_DEVICE, NULL,
- US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 ),
+ US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 | US_FL_IGNORE_RESIDUE ),
UNUSUAL_DEV( 0x05e3, 0x0702, 0x0000, 0xffff,
"Genesys Logic",
"USB to IDE Disk",
US_SC_DEVICE, US_PR_DEVICE, NULL,
- US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 ),
+ US_FL_GO_SLOW | US_FL_MAX_SECTORS_64 | US_FL_IGNORE_RESIDUE ),
/* Reported by Ben Efros <ben@pc-doctor.com> */
UNUSUAL_DEV( 0x05e3, 0x0723, 0x9451, 0x9451,
static struct platform_driver pxafb_driver = {
.probe = pxafb_probe,
- .remove = pxafb_remove,
+ .remove = __devexit_p(pxafb_remove),
.suspend = pxafb_suspend,
.resume = pxafb_resume,
.driver = {
Say Y here if you want to connect 1-wire
simple 64bit memory rom (ds2401/ds2411/ds1990*) to your wire.
+config W1_SLAVE_DS2431
+ tristate "1kb EEPROM family support (DS2431)"
+ help
+ Say Y here if you want to use a 1-wire
+ 1kb EEPROM family device (DS2431).
+
config W1_SLAVE_DS2433
tristate "4kb EEPROM family support (DS2433)"
help
obj-$(CONFIG_W1_SLAVE_THERM) += w1_therm.o
obj-$(CONFIG_W1_SLAVE_SMEM) += w1_smem.o
+obj-$(CONFIG_W1_SLAVE_DS2431) += w1_ds2431.o
obj-$(CONFIG_W1_SLAVE_DS2433) += w1_ds2433.o
obj-$(CONFIG_W1_SLAVE_DS2760) += w1_ds2760.o
obj-$(CONFIG_W1_SLAVE_BQ27000) += w1_bq27000.o
*/
static int w1_f23_write(struct w1_slave *sl, int addr, int len, const u8 *data)
{
+#ifdef CONFIG_W1_SLAVE_DS2433_CRC
+ struct w1_f23_data *f23 = sl->family_data;
+#endif
u8 wrbuf[4];
u8 rdbuf[W1_PAGE_SIZE + 3];
u8 es = (addr + len - 1) & 0x1f;
/* Reset the bus to wake up the EEPROM (this may not be needed) */
w1_reset_bus(sl->master);
-
+#ifdef CONFIG_W1_SLAVE_DS2433_CRC
+ f23->validcrc &= ~(1 << (addr >> W1_PAGE_BITS));
+#endif
return 0;
}
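
validcrc is a per-page bitmap in the family data, so a write only invalidates the cached CRC of the page it touched. A hedged sketch of how a read-side consumer might consult it — the bitmap and the shift come from this hunk, everything else is assumed:

	int page = addr >> W1_PAGE_BITS;	/* page index for this address */

	if (!(f23->validcrc & (1 << page))) {
		/* cached CRC is stale: re-read the page and recompute before use */
	}
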
# Do not add any filesystems before this line
obj-$(CONFIG_REISERFS_FS) += reiserfs/
obj-$(CONFIG_EXT3_FS) += ext3/ # Before ext2 so root fs can be ext3
-obj-$(CONFIG_EXT4_FS) += ext4/ # Before ext2 so root fs can be ext4
+obj-$(CONFIG_EXT2_FS) += ext2/
+# We place ext4 after ext2 so plain ext2 root fs's are mounted using ext2
+# unless explicitly requested by rootfstype
+obj-$(CONFIG_EXT4_FS) += ext4/
obj-$(CONFIG_JBD) += jbd/
obj-$(CONFIG_JBD2) += jbd2/
-obj-$(CONFIG_EXT2_FS) += ext2/
obj-$(CONFIG_CRAMFS) += cramfs/
obj-$(CONFIG_SQUASHFS) += squashfs/
obj-y += ramfs/
struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
{
struct bio *bio = NULL;
- void *p;
+ void *uninitialized_var(p);
if (bs) {
p = mempool_alloc(bs->bio_pool, gfp_mask);
*/
struct list_head delalloc_inodes;
+ /* the space_info for where this inode's data allocations are done */
+ struct btrfs_space_info *space_info;
+
/* full 64 bit generation number, struct vfs_inode doesn't have a big
* enough field for this.
*/
*/
u64 delalloc_bytes;
+ /* total number of bytes that may be used for this inode for
+ * delalloc
+ */
+ u64 reserved_bytes;
+
/*
* the size of the file stored in the metadata on disk. data=ordered
* means the in-memory i_size might be larger than the size on disk
struct btrfs_space_info {
u64 flags;
- u64 total_bytes;
- u64 bytes_used;
- u64 bytes_pinned;
- u64 bytes_reserved;
- u64 bytes_readonly;
- int full;
- int force_alloc;
+
+ u64 total_bytes; /* total bytes in the space */
+ u64 bytes_used; /* total bytes used on disk */
+ u64 bytes_pinned; /* total bytes pinned, will be freed when the
+ transaction finishes */
+ u64 bytes_reserved; /* total bytes the allocator has reserved for
+ current allocations */
+ u64 bytes_readonly; /* total bytes that are read only */
+
+ /* delalloc accounting */
+ u64 bytes_delalloc; /* number of bytes reserved for allocation,
+ this space is not necessarily reserved yet
+ by the allocator */
+ u64 bytes_may_use; /* number of bytes that may be used for
+ delalloc */
+
+ int full; /* indicates that we cannot allocate any more
+ chunks for this space */
+ int force_alloc; /* set if we need to force a chunk alloc for
+ this space */
+
struct list_head list;
/* for block groups in our same type */
int btrfs_cleanup_reloc_trees(struct btrfs_root *root);
int btrfs_reloc_clone_csums(struct inode *inode, u64 file_pos, u64 len);
u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags);
+void btrfs_set_inode_space_info(struct btrfs_root *root, struct inode *inode);
+int btrfs_check_metadata_free_space(struct btrfs_root *root);
+int btrfs_check_data_free_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes);
+void btrfs_free_reserved_data_space(struct btrfs_root *root,
+ struct inode *inode, u64 bytes);
+void btrfs_delalloc_reserve_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes);
+void btrfs_delalloc_free_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes);
/* ctree.c */
int btrfs_previous_item(struct btrfs_root *root,
struct btrfs_path *path, u64 min_objectid,
unsigned long btrfs_force_ra(struct address_space *mapping,
struct file_ra_state *ra, struct file *file,
pgoff_t offset, pgoff_t last_index);
-int btrfs_check_free_space(struct btrfs_root *root, u64 num_required,
- int for_del);
int btrfs_page_mkwrite(struct vm_area_struct *vma, struct page *page);
int btrfs_readpage(struct file *file, struct page *page);
void btrfs_delete_inode(struct inode *inode);
u64 bytenr, u64 num_bytes, int alloc,
int mark_free);
+static int do_chunk_alloc(struct btrfs_trans_handle *trans,
+ struct btrfs_root *extent_root, u64 alloc_bytes,
+ u64 flags, int force);
+
static int block_group_bits(struct btrfs_block_group_cache *cache, u64 bits)
{
return (cache->flags & bits) == bits;
found->bytes_pinned = 0;
found->bytes_reserved = 0;
found->bytes_readonly = 0;
+ found->bytes_delalloc = 0;
found->full = 0;
found->force_alloc = 0;
*space_info = found;
return flags;
}
+static u64 btrfs_get_alloc_profile(struct btrfs_root *root, u64 data)
+{
+ struct btrfs_fs_info *info = root->fs_info;
+ u64 alloc_profile;
+
+ if (data) {
+ alloc_profile = info->avail_data_alloc_bits &
+ info->data_alloc_profile;
+ data = BTRFS_BLOCK_GROUP_DATA | alloc_profile;
+ } else if (root == root->fs_info->chunk_root) {
+ alloc_profile = info->avail_system_alloc_bits &
+ info->system_alloc_profile;
+ data = BTRFS_BLOCK_GROUP_SYSTEM | alloc_profile;
+ } else {
+ alloc_profile = info->avail_metadata_alloc_bits &
+ info->metadata_alloc_profile;
+ data = BTRFS_BLOCK_GROUP_METADATA | alloc_profile;
+ }
+
+ return btrfs_reduce_alloc_profile(root, data);
+}
+
+void btrfs_set_inode_space_info(struct btrfs_root *root, struct inode *inode)
+{
+ u64 alloc_target;
+
+ alloc_target = btrfs_get_alloc_profile(root, 1);
+ BTRFS_I(inode)->space_info = __find_space_info(root->fs_info,
+ alloc_target);
+}
+
+/*
+ * for now this just makes sure we have at least 5% of our metadata space free
+ * for use.
+ */
+int btrfs_check_metadata_free_space(struct btrfs_root *root)
+{
+ struct btrfs_fs_info *info = root->fs_info;
+ struct btrfs_space_info *meta_sinfo;
+ u64 alloc_target, thresh;
+ int committed = 0, ret;
+
+ /* get the space info for where the metadata will live */
+ alloc_target = btrfs_get_alloc_profile(root, 0);
+ meta_sinfo = __find_space_info(info, alloc_target);
+
+again:
+ spin_lock(&meta_sinfo->lock);
+ if (!meta_sinfo->full)
+ thresh = meta_sinfo->total_bytes * 80;
+ else
+ thresh = meta_sinfo->total_bytes * 95;
+
+ do_div(thresh, 100);
+
+ if (meta_sinfo->bytes_used + meta_sinfo->bytes_reserved +
+ meta_sinfo->bytes_pinned + meta_sinfo->bytes_readonly > thresh) {
+ struct btrfs_trans_handle *trans;
+ if (!meta_sinfo->full) {
+ meta_sinfo->force_alloc = 1;
+ spin_unlock(&meta_sinfo->lock);
+
+ trans = btrfs_start_transaction(root, 1);
+ if (!trans)
+ return -ENOMEM;
+
+ ret = do_chunk_alloc(trans, root->fs_info->extent_root,
+ 2 * 1024 * 1024, alloc_target, 0);
+ btrfs_end_transaction(trans, root);
+ goto again;
+ }
+ spin_unlock(&meta_sinfo->lock);
+
+ if (!committed) {
+ committed = 1;
+ trans = btrfs_join_transaction(root, 1);
+ if (!trans)
+ return -ENOMEM;
+ ret = btrfs_commit_transaction(trans, root);
+ if (ret)
+ return ret;
+ goto again;
+ }
+ return -ENOSPC;
+ }
+ spin_unlock(&meta_sinfo->lock);
+
+ return 0;
+}
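
As a worked example of the two thresholds above (numbers assumed, not from the patch): a not-yet-full space trips the check at 80% committed and forces a chunk allocation, while a full space trips at 95%, commits the running transaction once, and returns -ENOSPC if that did not help:

	int full = 0;				/* stand-in for meta_sinfo->full */
	u64 total = 100ULL << 30;		/* assumed: 100 GiB space */
	u64 thresh = full ? total * 95 : total * 80;

	do_div(thresh, 100);			/* -> 80 GiB here; 95 GiB when full */
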
+
+/*
+ * This will check the space that the inode allocates from to make sure we have
+ * enough space for bytes.
+ */
+int btrfs_check_data_free_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes)
+{
+ struct btrfs_space_info *data_sinfo;
+ int ret = 0, committed = 0;
+
+ /* make sure bytes are sectorsize aligned */
+ bytes = (bytes + root->sectorsize - 1) & ~((u64)root->sectorsize - 1);
+
+ data_sinfo = BTRFS_I(inode)->space_info;
+again:
+ /* make sure we have enough space to handle the data first */
+ spin_lock(&data_sinfo->lock);
+ if (data_sinfo->total_bytes - data_sinfo->bytes_used -
+ data_sinfo->bytes_delalloc - data_sinfo->bytes_reserved -
+ data_sinfo->bytes_pinned - data_sinfo->bytes_readonly -
+ data_sinfo->bytes_may_use < bytes) {
+ struct btrfs_trans_handle *trans;
+
+ /*
+ * if we don't have enough free bytes in this space then we need
+ * to alloc a new chunk.
+ */
+ if (!data_sinfo->full) {
+ u64 alloc_target;
+
+ data_sinfo->force_alloc = 1;
+ spin_unlock(&data_sinfo->lock);
+
+ alloc_target = btrfs_get_alloc_profile(root, 1);
+ trans = btrfs_start_transaction(root, 1);
+ if (!trans)
+ return -ENOMEM;
+
+ ret = do_chunk_alloc(trans, root->fs_info->extent_root,
+ bytes + 2 * 1024 * 1024,
+ alloc_target, 0);
+ btrfs_end_transaction(trans, root);
+ if (ret)
+ return ret;
+ goto again;
+ }
+ spin_unlock(&data_sinfo->lock);
+
+ /* commit the current transaction and try again */
+ if (!committed) {
+ committed = 1;
+ trans = btrfs_join_transaction(root, 1);
+ if (!trans)
+ return -ENOMEM;
+ ret = btrfs_commit_transaction(trans, root);
+ if (ret)
+ return ret;
+ goto again;
+ }
+
+ printk(KERN_ERR "no space left, need %llu, %llu delalloc bytes"
+ ", %llu bytes_used, %llu bytes_reserved, "
+ "%llu bytes_pinned, %llu bytes_readonly, %llu may use"
+ "%llu total\n", bytes, data_sinfo->bytes_delalloc,
+ data_sinfo->bytes_used, data_sinfo->bytes_reserved,
+ data_sinfo->bytes_pinned, data_sinfo->bytes_readonly,
+ data_sinfo->bytes_may_use, data_sinfo->total_bytes);
+ return -ENOSPC;
+ }
+ data_sinfo->bytes_may_use += bytes;
+ BTRFS_I(inode)->reserved_bytes += bytes;
+ spin_unlock(&data_sinfo->lock);
+
+ return btrfs_check_metadata_free_space(root);
+}
+
+/*
+ * if there was an error for whatever reason after calling
+ * btrfs_check_data_free_space, call this so we can cleanup the counters.
+ */
+void btrfs_free_reserved_data_space(struct btrfs_root *root,
+ struct inode *inode, u64 bytes)
+{
+ struct btrfs_space_info *data_sinfo;
+
+ /* make sure bytes are sectorsize aligned */
+ bytes = (bytes + root->sectorsize - 1) & ~((u64)root->sectorsize - 1);
+
+ data_sinfo = BTRFS_I(inode)->space_info;
+ spin_lock(&data_sinfo->lock);
+ data_sinfo->bytes_may_use -= bytes;
+ BTRFS_I(inode)->reserved_bytes -= bytes;
+ spin_unlock(&data_sinfo->lock);
+}
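
The pairing rule for callers, then: every successful btrfs_check_data_free_space() must be matched either by a completed write or by btrfs_free_reserved_data_space() on the failure path. A sketch — do_the_write() is hypothetical, but the file-write hunks later in this patch follow exactly this shape:

	ret = btrfs_check_data_free_space(root, inode, write_bytes);
	if (ret)
		return ret;

	ret = do_the_write(inode, buf, write_bytes);	/* hypothetical */
	if (ret)
		btrfs_free_reserved_data_space(root, inode, write_bytes);
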
+
+/* called when we are adding a delalloc extent to the inode's io_tree */
+void btrfs_delalloc_reserve_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes)
+{
+ struct btrfs_space_info *data_sinfo;
+
+ /* get the space info for where this inode will be storing its data */
+ data_sinfo = BTRFS_I(inode)->space_info;
+
+ /* make sure we have enough space to handle the data first */
+ spin_lock(&data_sinfo->lock);
+ data_sinfo->bytes_delalloc += bytes;
+
+ /*
+ * we are adding a delalloc extent without calling
+ * btrfs_check_data_free_space first. This happens on a weird
+ * writepage condition, but shouldn't hurt our accounting
+ */
+ if (unlikely(bytes > BTRFS_I(inode)->reserved_bytes)) {
+ data_sinfo->bytes_may_use -= BTRFS_I(inode)->reserved_bytes;
+ BTRFS_I(inode)->reserved_bytes = 0;
+ } else {
+ data_sinfo->bytes_may_use -= bytes;
+ BTRFS_I(inode)->reserved_bytes -= bytes;
+ }
+
+ spin_unlock(&data_sinfo->lock);
+}
+
+/* called when we are clearing a delalloc extent from the inode's io_tree */
+void btrfs_delalloc_free_space(struct btrfs_root *root, struct inode *inode,
+ u64 bytes)
+{
+ struct btrfs_space_info *info;
+
+ info = BTRFS_I(inode)->space_info;
+
+ spin_lock(&info->lock);
+ info->bytes_delalloc -= bytes;
+ spin_unlock(&info->lock);
+}
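
Putting the four helpers together, one reserved byte moves through the new counters like this (sketch; the final hand-off into bytes_reserved/bytes_used happens in the allocator):

/*
 *   btrfs_check_data_free_space()     bytes_may_use  += n
 *   EXTENT_DELALLOC set:
 *     btrfs_delalloc_reserve_space()  bytes_may_use  -= n
 *                                     bytes_delalloc += n
 *   EXTENT_DELALLOC cleared:
 *     btrfs_delalloc_free_space()     bytes_delalloc -= n
 */
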
+
static int do_chunk_alloc(struct btrfs_trans_handle *trans,
struct btrfs_root *extent_root, u64 alloc_bytes,
u64 flags, int force)
(unsigned long long)(info->total_bytes - info->bytes_used -
info->bytes_pinned - info->bytes_reserved),
(info->full) ? "" : "not ");
+ printk(KERN_INFO "space_info total=%llu, pinned=%llu, delalloc=%llu,"
+ " may_use=%llu, used=%llu\n", info->total_bytes,
+ info->bytes_pinned, info->bytes_delalloc, info->bytes_may_use,
+ info->bytes_used);
down_read(&info->groups_sem);
list_for_each_entry(cache, &info->block_groups, list) {
{
int ret;
u64 search_start = 0;
- u64 alloc_profile;
struct btrfs_fs_info *info = root->fs_info;
- if (data) {
- alloc_profile = info->avail_data_alloc_bits &
- info->data_alloc_profile;
- data = BTRFS_BLOCK_GROUP_DATA | alloc_profile;
- } else if (root == root->fs_info->chunk_root) {
- alloc_profile = info->avail_system_alloc_bits &
- info->system_alloc_profile;
- data = BTRFS_BLOCK_GROUP_SYSTEM | alloc_profile;
- } else {
- alloc_profile = info->avail_metadata_alloc_bits &
- info->metadata_alloc_profile;
- data = BTRFS_BLOCK_GROUP_METADATA | alloc_profile;
- }
+ data = btrfs_get_alloc_profile(root, data);
again:
- data = btrfs_reduce_alloc_profile(root, data);
/*
* the only place that sets empty_size is btrfs_realloc_node, which
* is not called recursively on allocations
WARN_ON(num_pages > nrptrs);
memset(pages, 0, sizeof(struct page *) * nrptrs);
- ret = btrfs_check_free_space(root, write_bytes, 0);
+ ret = btrfs_check_data_free_space(root, inode, write_bytes);
if (ret)
goto out;
ret = prepare_pages(root, file, pages, num_pages,
pos, first_index, last_index,
write_bytes);
- if (ret)
+ if (ret) {
+ btrfs_free_reserved_data_space(root, inode,
+ write_bytes);
goto out;
+ }
ret = btrfs_copy_from_user(pos, num_pages,
write_bytes, pages, buf);
if (ret) {
+ btrfs_free_reserved_data_space(root, inode,
+ write_bytes);
btrfs_drop_pages(pages, num_pages);
goto out;
}
ret = dirty_and_release_pages(NULL, root, file, pages,
num_pages, pos, write_bytes);
btrfs_drop_pages(pages, num_pages);
- if (ret)
+ if (ret) {
+ btrfs_free_reserved_data_space(root, inode,
+ write_bytes);
goto out;
+ }
if (will_write) {
btrfs_fdatawrite_range(inode->i_mapping, pos,
}
out:
mutex_unlock(&inode->i_mutex);
+ if (ret)
+ err = ret;
out_nolock:
kfree(pages);
return err;
}
-/*
- * a very lame attempt at stopping writes when the FS is 85% full. There
- * are countless ways this is incorrect, but it is better than nothing.
- */
-int btrfs_check_free_space(struct btrfs_root *root, u64 num_required,
- int for_del)
-{
- u64 total;
- u64 used;
- u64 thresh;
- int ret = 0;
-
- spin_lock(&root->fs_info->delalloc_lock);
- total = btrfs_super_total_bytes(&root->fs_info->super_copy);
- used = btrfs_super_bytes_used(&root->fs_info->super_copy);
- if (for_del)
- thresh = total * 90;
- else
- thresh = total * 85;
-
- do_div(thresh, 100);
-
- if (used + root->fs_info->delalloc_bytes + num_required > thresh)
- ret = -ENOSPC;
- spin_unlock(&root->fs_info->delalloc_lock);
- return ret;
-}
-
/*
* this does all the hard work for inserting an inline extent into
* the btree. The caller should have done a btrfs_drop_extents so that
*/
if (!(old & EXTENT_DELALLOC) && (bits & EXTENT_DELALLOC)) {
struct btrfs_root *root = BTRFS_I(inode)->root;
+ btrfs_delalloc_reserve_space(root, inode, end - start + 1);
spin_lock(&root->fs_info->delalloc_lock);
BTRFS_I(inode)->delalloc_bytes += end - start + 1;
root->fs_info->delalloc_bytes += end - start + 1;
(unsigned long long)end - start + 1,
(unsigned long long)
root->fs_info->delalloc_bytes);
+ btrfs_delalloc_free_space(root, inode, (u64)-1);
root->fs_info->delalloc_bytes = 0;
BTRFS_I(inode)->delalloc_bytes = 0;
} else {
+ btrfs_delalloc_free_space(root, inode,
+ end - start + 1);
root->fs_info->delalloc_bytes -= end - start + 1;
BTRFS_I(inode)->delalloc_bytes -= end - start + 1;
}
root = BTRFS_I(dir)->root;
- ret = btrfs_check_free_space(root, 1, 1);
- if (ret)
- goto fail;
-
trans = btrfs_start_transaction(root, 1);
btrfs_set_trans_block_group(trans, dir);
nr = trans->blocks_used;
btrfs_end_transaction_throttle(trans, root);
-fail:
btrfs_btree_balance_dirty(root, nr);
return ret;
}
return -ENOTEMPTY;
}
- ret = btrfs_check_free_space(root, 1, 1);
- if (ret)
- goto fail;
-
trans = btrfs_start_transaction(root, 1);
btrfs_set_trans_block_group(trans, dir);
fail_trans:
nr = trans->blocks_used;
ret = btrfs_end_transaction_throttle(trans, root);
-fail:
btrfs_btree_balance_dirty(root, nr);
if (ret && !err)
if (size <= hole_start)
return 0;
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
return err;
bi->last_trans = 0;
bi->logged_trans = 0;
bi->delalloc_bytes = 0;
+ bi->reserved_bytes = 0;
bi->disk_i_size = 0;
bi->flags = 0;
bi->index_cnt = (u64)-1;
inode->i_ino = args->ino;
init_btrfs_i(inode);
BTRFS_I(inode)->root = args->root;
+ btrfs_set_inode_space_info(args->root, inode);
return 0;
}
BTRFS_I(inode)->index_cnt = 2;
BTRFS_I(inode)->root = root;
BTRFS_I(inode)->generation = trans->transid;
+ btrfs_set_inode_space_info(root, inode);
if (mode & S_IFDIR)
owner = 0;
if (!new_valid_dev(rdev))
return -EINVAL;
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
goto fail;
u64 objectid;
u64 index = 0;
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
goto fail;
trans = btrfs_start_transaction(root, 1);
return -ENOENT;
btrfs_inc_nlink(inode);
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
goto fail;
err = btrfs_set_inode_index(dir, &index);
u64 index = 0;
unsigned long nr = 1;
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
goto out_unlock;
u64 page_start;
u64 page_end;
- ret = btrfs_check_free_space(root, PAGE_CACHE_SIZE, 0);
+ ret = btrfs_check_data_free_space(root, inode, PAGE_CACHE_SIZE);
if (ret)
goto out;
if ((page->mapping != inode->i_mapping) ||
(page_start >= size)) {
+ btrfs_free_reserved_data_space(root, inode, PAGE_CACHE_SIZE);
/* page got truncated out from underneath us */
goto out_unlock;
}
if (old_inode->i_ino == BTRFS_FIRST_FREE_OBJECTID)
return -EXDEV;
- ret = btrfs_check_free_space(root, 1, 0);
+ ret = btrfs_check_metadata_free_space(root);
if (ret)
goto out_unlock;
if (name_len > BTRFS_MAX_INLINE_DATA_SIZE(root))
return -ENAMETOOLONG;
- err = btrfs_check_free_space(root, 1, 0);
+ err = btrfs_check_metadata_free_space(root);
if (err)
goto out_fail;
u64 index = 0;
unsigned long nr = 1;
- ret = btrfs_check_free_space(root, 1, 0);
+ ret = btrfs_check_metadata_free_space(root);
if (ret)
goto fail_commit;
if (!root->ref_cows)
return -EINVAL;
- ret = btrfs_check_free_space(root, 1, 0);
+ ret = btrfs_check_metadata_free_space(root);
if (ret)
goto fail_unlock;
unsigned long i;
int ret;
- ret = btrfs_check_free_space(root, inode->i_size, 0);
+ ret = btrfs_check_data_free_space(root, inode, inode->i_size);
if (ret)
return -ENOSPC;
/* 0x00 */
COMPATIBLE_IOCTL(FIBMAP)
COMPATIBLE_IOCTL(FIGETBSZ)
+/* 'X' - originally XFS but some now in the VFS */
+COMPATIBLE_IOCTL(FIFREEZE)
+COMPATIBLE_IOCTL(FITHAW)
/* RAID */
COMPATIBLE_IOCTL(RAID_VERSION)
COMPATIBLE_IOCTL(GET_ARRAY_INFO)
iput(inode);
return res;
}
-EXPORT_SYMBOL_GPL(d_obtain_alias);
+EXPORT_SYMBOL(d_obtain_alias);
/**
* d_splice_alias - splice a disconnected dentry into the tree if one exists
*/
int ext4_should_retry_alloc(struct super_block *sb, int *retries)
{
- if (!ext4_has_free_blocks(EXT4_SB(sb), 1) || (*retries)++ > 3)
+ if (!ext4_has_free_blocks(EXT4_SB(sb), 1) ||
+ (*retries)++ > 3 ||
+ !EXT4_SB(sb)->s_journal)
return 0;
jbd_debug(1, "%s: retrying operation after ENOSPC\n", sb->s_id);
ext4_journal_stop(handle);
- if (mpd.retval == -ENOSPC) {
+ if ((mpd.retval == -ENOSPC) && sbi->s_journal) {
/* commit the transaction which would
* free blocks released in the transaction
* and try again
/* Journal blocked and flushed, clear needs_recovery flag. */
EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
- ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
error = ext4_commit_super(sb, EXT4_SB(sb)->s_es, 1);
if (error)
goto out;
spin_unlock(&c->erase_completion_lock);
- /* This thread is purely an optimisation. But if it runs when
- other things could be running, it actually makes things a
- lot worse. Use yield() and put it at the back of the runqueue
- every time. Especially during boot, pulling an inode in
- with read_inode() is much preferable to having the GC thread
- get there first. */
- yield();
+ /* Problem - immediately after bootup, the GCD spends a lot
+ * of time in places like jffs2_kill_fragtree(); so much so
+ * that userspace processes (like gdm and X) are starved
+ * despite plenty of cond_resched()s and renicing. Yield()
+ * doesn't help, either (presumably because userspace and GCD
+ * are generally competing for a higher latency resource -
+ * disk).
+ * This forces the GCD to slow the hell down. Pulling an
+ * inode in with read_inode() is much preferable to having
+ * the GC thread get there first. */
+ schedule_timeout_interruptible(msecs_to_jiffies(50));
/* Put_super will send a SIGKILL and then wait on the sem.
*/
struct jffs2_tmp_dnode_info *tn)
{
uint32_t fn_end = tn->fn->ofs + tn->fn->size;
- struct jffs2_tmp_dnode_info *this;
+ struct jffs2_tmp_dnode_info *this, *ptn;
dbg_readinode("insert fragment %#04x-%#04x, ver %u at %08x\n", tn->fn->ofs, fn_end, tn->version, ref_offset(tn->fn->raw));
if (this) {
/* If the node is coincident with another at a lower address,
back up until the other node is found. It may be relevant */
- while (this->overlapped)
- this = tn_prev(this);
-
- /* First node should never be marked overlapped */
- BUG_ON(!this);
+ while (this->overlapped) {
+ ptn = tn_prev(this);
+ if (!ptn) {
+ /*
+ * We killed a node which set the overlapped
+ * flags during the scan. Fix it up.
+ */
+ this->overlapped = 0;
+ break;
+ }
+ this = ptn;
+ }
dbg_readinode("'this' found %#04x-%#04x (%s)\n", this->fn->ofs, this->fn->ofs + this->fn->size, this->fn ? "data" : "hole");
}
}
if (!this->overlapped)
break;
- this = tn_prev(this);
+
+ ptn = tn_prev(this);
+ if (!ptn) {
+ /*
+ * We killed a node which set the overlapped
+ * flags during the scan. Fix it up.
+ */
+ this->overlapped = 0;
+ break;
+ }
+ this = ptn;
}
}
eat_last(&rii->tn_root, &last->rb);
ver_insert(&ver_root, last);
- if (unlikely(last->overlapped))
- continue;
+ if (unlikely(last->overlapped)) {
+ if (pen)
+ continue;
+ /*
+ * We killed a node which set the overlapped
+ * flags during the scan. Fix it up.
+ */
+ last->overlapped = 0;
+ }
/* Now we have a bunch of nodes in reverse version
order, in the tree at ver_root. Most of the time,
return ret;
}
+static int ocfs2_replace_extent_rec(struct inode *inode,
+ handle_t *handle,
+ struct ocfs2_path *path,
+ struct ocfs2_extent_list *el,
+ int split_index,
+ struct ocfs2_extent_rec *split_rec)
+{
+ int ret;
+
+ ret = ocfs2_path_bh_journal_access(handle, inode, path,
+ path_num_items(path) - 1);
+ if (ret) {
+ mlog_errno(ret);
+ goto out;
+ }
+
+ el->l_recs[split_index] = *split_rec;
+
+ ocfs2_journal_dirty(handle, path_leaf_bh(path));
+out:
+ return ret;
+}
+
/*
* Mark part or all of the extent record at split_index in the leaf
* pointed to by path as written. This removes the unwritten
if (ctxt.c_contig_type == CONTIG_NONE) {
if (ctxt.c_split_covers_rec)
- el->l_recs[split_index] = *split_rec;
+ ret = ocfs2_replace_extent_rec(inode, handle,
+ path, el,
+ split_index, split_rec);
else
ret = ocfs2_split_and_insert(inode, handle, path, et,
&last_eb_bh, split_index,
if (!mle) {
if (res->owner != DLM_LOCK_RES_OWNER_UNKNOWN &&
res->owner != assert->node_idx) {
- mlog(ML_ERROR, "assert_master from "
- "%u, but current owner is "
- "%u! (%.*s)\n",
- assert->node_idx, res->owner,
- namelen, name);
- goto kill;
+ mlog(ML_ERROR, "DIE! Mastery assert from %u, "
+ "but current owner is %u! (%.*s)\n",
+ assert->node_idx, res->owner, namelen,
+ name);
+ __dlm_print_one_lock_resource(res);
+ BUG();
}
} else if (mle->type != DLM_MLE_MIGRATION) {
if (res->owner != DLM_LOCK_RES_OWNER_UNKNOWN) {
spin_lock(&res->spinlock);
/* This ensures that clear refmap is sent after the set */
- __dlm_wait_on_lockres_flags(res, (DLM_LOCK_RES_SETREF_INPROG |
- DLM_LOCK_RES_MIGRATING));
+ __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG);
spin_unlock(&res->spinlock);
/* clear our bit from the master's refmap, ignore errors */
else
BUG_ON(res->owner == dlm->node_num);
- spin_lock(&dlm->spinlock);
+ spin_lock(&dlm->ast_lock);
/* We want to be sure that we're not freeing a lock
* that still has AST's pending... */
in_use = !list_empty(&lock->ast_list);
- spin_unlock(&dlm->spinlock);
+ spin_unlock(&dlm->ast_lock);
if (in_use) {
mlog(ML_ERROR, "lockres %.*s: Someone is calling dlmunlock "
"while waiting for an ast!", res->lockname.len,
struct ocfs2_lock_res *lockres);
static inline void ocfs2_recover_from_dlm_error(struct ocfs2_lock_res *lockres,
int convert);
-#define ocfs2_log_dlm_error(_func, _err, _lockres) do { \
- mlog(ML_ERROR, "DLM error %d while calling %s on resource %s\n", \
- _err, _func, _lockres->l_name); \
+#define ocfs2_log_dlm_error(_func, _err, _lockres) do { \
+ if ((_lockres)->l_type != OCFS2_LOCK_TYPE_DENTRY) \
+ mlog(ML_ERROR, "DLM error %d while calling %s on resource %s\n", \
+ _err, _func, _lockres->l_name); \
+ else \
+ mlog(ML_ERROR, "DLM error %d while calling %s on resource %.*s%08x\n", \
+ _err, _func, OCFS2_DENTRY_LOCK_INO_START - 1, (_lockres)->l_name, \
+ (unsigned int)ocfs2_get_dentry_lock_ino(_lockres)); \
} while (0)
static int ocfs2_downconvert_thread(void *arg);
static void ocfs2_downconvert_on_unlock(struct ocfs2_super *osb,
struct ocfs2_node_map osb_recovering_orphan_dirs;
unsigned int *osb_orphan_wipes;
wait_queue_head_t osb_wipe_event;
+
+ /* used to protect metaecc calculation check of xattr. */
+ spinlock_t osb_xattr_lock;
};
#define OCFS2_SB(sb) ((struct ocfs2_super *)(sb)->s_fs_info)
unlock_buffer(*bh);
ll_rw_block(READ, 1, bh);
wait_on_buffer(*bh);
+ if (!buffer_uptodate(*bh)) {
+ mlog_errno(-EIO);
+ brelse(*bh);
+ *bh = NULL;
+ return -EIO;
+ }
+
return 0;
}
INIT_LIST_HEAD(&osb->blocked_lock_list);
osb->blocked_lock_count = 0;
spin_lock_init(&osb->osb_lock);
+ spin_lock_init(&osb->osb_xattr_lock);
ocfs2_init_inode_steal_slot(osb);
atomic_set(&osb->alloc_stats.moves, 0);
#define OCFS2_XATTR_ROOT_SIZE (sizeof(struct ocfs2_xattr_def_value_root))
#define OCFS2_XATTR_INLINE_SIZE 80
+#define OCFS2_XATTR_HEADER_GAP 4
#define OCFS2_XATTR_FREE_IN_IBODY (OCFS2_MIN_XATTR_INLINE_SIZE \
- sizeof(struct ocfs2_xattr_header) \
- - sizeof(__u32))
+ - OCFS2_XATTR_HEADER_GAP)
#define OCFS2_XATTR_FREE_IN_BLOCK(ptr) ((ptr)->i_sb->s_blocksize \
- sizeof(struct ocfs2_xattr_block) \
- sizeof(struct ocfs2_xattr_header) \
- - sizeof(__u32))
+ - OCFS2_XATTR_HEADER_GAP)
static struct ocfs2_xattr_def_value_root def_xv = {
.xv.xr_list.l_count = cpu_to_le16(1),
bucket->bu_blocks, bucket->bu_bhs, 0,
NULL);
if (!rc) {
+ spin_lock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock);
rc = ocfs2_validate_meta_ecc_bhs(bucket->bu_inode->i_sb,
bucket->bu_bhs,
bucket->bu_blocks,
&bucket_xh(bucket)->xh_check);
+ spin_unlock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock);
if (rc)
mlog_errno(rc);
}
{
int i;
+ spin_lock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock);
ocfs2_compute_meta_ecc_bhs(bucket->bu_inode->i_sb,
bucket->bu_bhs, bucket->bu_blocks,
&bucket_xh(bucket)->xh_check);
+ spin_unlock(&OCFS2_SB(bucket->bu_inode->i_sb)->osb_xattr_lock);
for (i = 0; i < bucket->bu_blocks; i++)
ocfs2_journal_dirty(handle, bucket->bu_bhs[i]);
last += 1;
}
- free = min_offs - ((void *)last - xs->base) - sizeof(__u32);
+ free = min_offs - ((void *)last - xs->base) - OCFS2_XATTR_HEADER_GAP;
if (free < 0)
return -EIO;
last += 1;
}
- free = min_offs - ((void *)last - xs->base) - sizeof(__u32);
+ free = min_offs - ((void *)last - xs->base) - OCFS2_XATTR_HEADER_GAP;
if (free < 0)
return 0;
if (!ret) {
/* Update inode ctime. */
- ret = ocfs2_journal_access(ctxt->handle, inode, xis->inode_bh,
- OCFS2_JOURNAL_ACCESS_WRITE);
+ ret = ocfs2_journal_access_di(ctxt->handle, inode,
+ xis->inode_bh,
+ OCFS2_JOURNAL_ACCESS_WRITE);
if (ret) {
mlog_errno(ret);
goto out;
xh_free_start = le16_to_cpu(xh->xh_free_start);
header_size = sizeof(struct ocfs2_xattr_header) +
count * sizeof(struct ocfs2_xattr_entry);
- max_free = OCFS2_XATTR_BUCKET_SIZE -
- le16_to_cpu(xh->xh_name_value_len) - header_size;
+ max_free = OCFS2_XATTR_BUCKET_SIZE - header_size -
+ le16_to_cpu(xh->xh_name_value_len) - OCFS2_XATTR_HEADER_GAP;
mlog_bug_on_msg(header_size > blocksize, "bucket %llu has header size "
"of %u which exceed block size\n",
need = 0;
}
- free = xh_free_start - header_size;
+ free = xh_free_start - header_size - OCFS2_XATTR_HEADER_GAP;
/*
* We need to make sure the new name/value pair
* can exist in the same block.
}
xh_free_start = le16_to_cpu(xh->xh_free_start);
- free = xh_free_start - header_size;
+ free = xh_free_start - header_size
+ - OCFS2_XATTR_HEADER_GAP;
if (xh_free_start % blocksize < need)
free -= xh_free_start % blocksize;
void (*mode_set)(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
+ struct drm_crtc *(*get_crtc)(struct drm_encoder *encoder);
/* detect for DAC style encoders */
enum drm_connector_status (*detect)(struct drm_encoder *encoder,
struct drm_connector *connector);
u8 hsync_pulse_width_lo;
u8 vsync_pulse_width_lo:4;
u8 vsync_offset_lo:4;
- u8 hsync_pulse_width_hi:2;
- u8 hsync_offset_hi:2;
u8 vsync_pulse_width_hi:2;
u8 vsync_offset_hi:2;
+ u8 hsync_pulse_width_hi:2;
+ u8 hsync_offset_hi:2;
u8 width_mm_lo;
u8 height_mm_lo;
u8 height_mm_hi:4;
header-y += cgroupstats.h
header-y += cramfs_fs.h
header-y += cycx_cfm.h
+header-y += dcbnl.h
header-y += dlmconstants.h
header-y += dlm_device.h
header-y += dlm_netlink.h
};
/* This should not be used directly - use rq_for_each_segment */
+#define for_each_bio(_bio) \
+ for (; _bio; _bio = _bio->bi_next)
#define __rq_for_each_bio(_bio, rq) \
if ((rq->bio)) \
for (_bio = (rq)->bio; _bio; _bio = _bio->bi_next)
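A minimal usage sketch of the new for_each_bio() helper (illustrative only; rq is assumed to be a struct request *). The iterator variable doubles as the cursor, so it must be seeded with the head of the chain:

	struct bio *bio = rq->bio;	/* rq: assumed struct request *; start at the head */
	unsigned int nr_bios = 0;

	for_each_bio(bio)
		nr_bios++;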
#ifndef __LINUX_DCBNL_H__
#define __LINUX_DCBNL_H__
+#include <linux/types.h>
+
#define DCB_PROTO_VERSION 1
struct dcbmsg {
- unsigned char dcb_family;
+ __u8 dcb_family;
__u8 cmd;
__u16 dcb_pad;
};
#define to_ide_device(dev) container_of(dev, ide_drive_t, gendev)
#define to_ide_drv(obj, cont_type) \
- container_of(obj, struct cont_type, kref)
+ container_of(obj, struct cont_type, dev)
#define ide_drv_g(disk, cont_type) \
container_of((disk)->private_data, struct cont_type, driver)
/* FSTS_REG */
#define DMA_FSTS_PPF ((u32)2)
#define DMA_FSTS_PFO ((u32)1)
+#define DMA_FSTS_IQE (1 << 4)
#define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
/* FRCD_REG, 32 bits access */
unsigned int size_order, u64 type,
int non_present_entry_flush);
-extern void qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
+extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
extern void *intel_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
extern void intel_free_coherent(struct device *, size_t, void *, dma_addr_t);
* See Documentation/io_mapping.txt
*/
-/* this struct isn't actually defined anywhere */
-struct io_mapping;
-
#ifdef CONFIG_HAVE_ATOMIC_IOMAP
+struct io_mapping {
+ resource_size_t base;
+ unsigned long size;
+ pgprot_t prot;
+};
+
/*
* For small address space machines, mapping large objects
* into the kernel virtual space isn't practical. Where
*/
static inline struct io_mapping *
-io_mapping_create_wc(unsigned long base, unsigned long size)
+io_mapping_create_wc(resource_size_t base, unsigned long size)
{
- return (struct io_mapping *) base;
+ struct io_mapping *iomap;
+
+ if (!is_io_mapping_possible(base, size))
+ return NULL;
+
+ iomap = kmalloc(sizeof(*iomap), GFP_KERNEL);
+ if (!iomap)
+ return NULL;
+
+ iomap->base = base;
+ iomap->size = size;
+ iomap->prot = pgprot_writecombine(__pgprot(__PAGE_KERNEL));
+ return iomap;
}
static inline void
io_mapping_free(struct io_mapping *mapping)
{
+ kfree(mapping);
}
/* Atomic map/unmap */
static inline void *
io_mapping_map_atomic_wc(struct io_mapping *mapping, unsigned long offset)
{
- offset += (unsigned long) mapping;
- return iomap_atomic_prot_pfn(offset >> PAGE_SHIFT, KM_USER0,
- __pgprot(__PAGE_KERNEL_WC));
+ resource_size_t phys_addr;
+ unsigned long pfn;
+
+ BUG_ON(offset >= mapping->size);
+ phys_addr = mapping->base + offset;
+ pfn = (unsigned long) (phys_addr >> PAGE_SHIFT);
+ return iomap_atomic_prot_pfn(pfn, KM_USER0, mapping->prot);
}
static inline void
static inline void *
io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset)
{
- offset += (unsigned long) mapping;
- return ioremap_wc(offset, PAGE_SIZE);
+ resource_size_t phys_addr;
+
+ BUG_ON(offset >= mapping->size);
+ phys_addr = mapping->base + offset;
+
+ return ioremap_wc(phys_addr, PAGE_SIZE);
}
static inline void
#else
+/* this struct isn't actually defined anywhere */
+struct io_mapping;
+
/* Create the io_mapping object*/
static inline struct io_mapping *
-io_mapping_create_wc(unsigned long base, unsigned long size)
+io_mapping_create_wc(resource_size_t base, unsigned long size)
{
return (struct io_mapping *) ioremap_wc(base, size);
}
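A minimal usage sketch of the reworked io_mapping API (illustrative; base, size and offset are placeholder values, and callers must now handle a NULL return from io_mapping_create_wc()):

	struct io_mapping *map;
	void *vaddr;

	map = io_mapping_create_wc(base, size);	/* base/size: placeholders */
	if (!map)
		return -ENOMEM;

	vaddr = io_mapping_map_atomic_wc(map, offset);
	/* ... write-combined access within one page ... */
	io_mapping_unmap_atomic(vaddr);

	io_mapping_free(map);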
#define _XT_NFLOG_TARGET
#define XT_NFLOG_DEFAULT_GROUP 0x1
-#define XT_NFLOG_DEFAULT_THRESHOLD 1
+#define XT_NFLOG_DEFAULT_THRESHOLD 0
#define XT_NFLOG_MASK 0x0
#define rcu_enter_nohz() do { } while (0)
#define rcu_exit_nohz() do { } while (0)
+/* A context switch is a grace period for rcuclassic. */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1;
+}
+
#endif /* __LINUX_RCUCLASSIC_H */
void (*func)(struct rcu_head *head);
};
+/* Internal to kernel, but needed by rcupreempt.h. */
+extern int rcu_scheduler_active;
+
#if defined(CONFIG_CLASSIC_RCU)
#include <linux/rcuclassic.h>
#elif defined(CONFIG_TREE_RCU)
/* Internal to kernel */
extern void rcu_init(void);
+extern void rcu_scheduler_starting(void);
extern int rcu_needs_cpu(int cpu);
#endif /* __LINUX_RCUPDATE_H */
#define rcu_exit_nohz() do { } while (0)
#endif /* CONFIG_NO_HZ */
+/*
+ * A context switch is a grace period for rcupreempt synchronize_rcu()
+ * only during early boot, before the scheduler has been initialized.
+ * So, how the heck do we get a context switch? Well, if the caller
+ * invokes synchronize_rcu(), they are willing to accept a context
+ * switch, so we simply pretend that one happened.
+ *
+ * After boot, there might be a blocked or preempted task in an RCU
+ * read-side critical section, so we cannot then take the fastpath.
+ */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1 && !rcu_scheduler_active;
+}
+
#endif /* __LINUX_RCUPREEMPT_H */
}
#endif /* CONFIG_NO_HZ */
+/* A context switch is a grace period for rcutree. */
+static inline int rcu_blocking_is_gp(void)
+{
+ return num_online_cpus() == 1;
+}
+
#endif /* __LINUX_RCUTREE_H */
extern int sched_group_set_rt_period(struct task_group *tg,
long rt_period_us);
extern long sched_group_rt_period(struct task_group *tg);
+extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
#endif
#endif
+extern int task_can_switch_user(struct user_struct *up,
+ struct task_struct *tsk);
+
#ifdef CONFIG_TASK_XACCT
static inline void add_rchar(struct task_struct *tsk, ssize_t amt)
{
struct kref kref;
struct hlist_head uidhash_table[UIDHASH_SZ];
struct user_struct *creator;
+ struct work_struct destroyer;
};
extern struct user_namespace init_user_ns;
struct nf_conn *ct = (struct nf_conn *)skb->nfct;
int ret = NF_ACCEPT;
- if (ct) {
+ if (ct && ct != &nf_conntrack_untracked) {
if (!nf_ct_is_confirmed(ct) && !nf_ct_is_dying(ct))
ret = __nf_conntrack_confirm(skb);
nf_ct_deliver_cached_events(ct);
extern void tc_init(void);
#endif
-enum system_states system_state;
+enum system_states system_state __read_mostly;
EXPORT_SYMBOL(system_state);
/*
* at least once to get things moving:
*/
init_idle_bootup_task(current);
+ rcu_scheduler_starting();
preempt_enable_no_resched();
schedule();
preempt_disable();
void rcu_check_callbacks(int cpu, int user)
{
if (user ||
- (idle_cpu(cpu) && !in_softirq() &&
- hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
+ (idle_cpu(cpu) && rcu_scheduler_active &&
+ !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
/*
* Get here if this CPU took its interrupt from user
#include <linux/cpu.h>
#include <linux/mutex.h>
#include <linux/module.h>
+#include <linux/kernel_stat.h>
enum rcu_barrier {
RCU_BARRIER_STD,
static atomic_t rcu_barrier_cpu_count;
static DEFINE_MUTEX(rcu_barrier_mutex);
static struct completion rcu_barrier_completion;
+int rcu_scheduler_active __read_mostly;
/*
* Awaken the corresponding synchronize_rcu() instance now that a
void synchronize_rcu(void)
{
struct rcu_synchronize rcu;
+
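+ /* Fastpath: one online CPU (and, for preemptible RCU, a not-yet-started scheduler) already implies a grace period. */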
+ if (rcu_blocking_is_gp())
+ return;
+
init_completion(&rcu.completion);
/* Will wake me after RCU finished. */
call_rcu(&rcu.head, wakeme_after_rcu);
__rcu_init();
}
+void rcu_scheduler_starting(void)
+{
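+ /* called from the boot path strictly before the first context switch */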
+ WARN_ON(num_online_cpus() != 1);
+ WARN_ON(nr_context_switches() > 0);
+ rcu_scheduler_active = 1;
+}
{
struct rcu_synchronize rcu;
+ if (num_online_cpus() == 1)
+ return; /* blocking is gp if only one CPU! */
+
init_completion(&rcu.completion);
/* Will wake me after RCU finished. */
call_rcu_sched(&rcu.head, wakeme_after_rcu);
void rcu_check_callbacks(int cpu, int user)
{
if (user ||
- (idle_cpu(cpu) && !in_softirq() &&
- hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
+ (idle_cpu(cpu) && rcu_scheduler_active &&
+ !in_softirq() && hardirq_count() <= (1 << HARDIRQ_SHIFT))) {
/*
* Get here if this CPU took its interrupt from user
{
ktime_t now;
- if (rt_bandwidth_enabled() && rt_b->rt_runtime == RUNTIME_INF)
+ if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
return;
if (hrtimer_active(&rt_b->rt_period_timer))
return ret;
}
+
+int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+{
+ /* Don't accept realtime tasks when there is no way for them to run */
+ if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
+ return 0;
+
+ return 1;
+}
+
#else /* !CONFIG_RT_GROUP_SCHED */
static int sched_rt_global_constraints(void)
{
struct task_struct *tsk)
{
#ifdef CONFIG_RT_GROUP_SCHED
- /* Don't accept realtime tasks when there is no way for them to run */
- if (rt_task(tsk) && cgroup_tg(cgrp)->rt_bandwidth.rt_runtime == 0)
+ if (!sched_rt_can_attach(cgroup_tg(cgrp), tsk))
return -EINVAL;
#else
/* We don't support RT-tasks being in separate groups */
if (sched_clock_stable)
return sched_clock();
+ scd = cpu_sdc(cpu);
+
/*
* Normally this is not called in NMI context - but if it is,
* trying to do any locking here is totally lethal.
if (unlikely(!sched_clock_running))
return 0ull;
- scd = cpu_sdc(cpu);
WARN_ON_ONCE(!irqs_disabled());
now = sched_clock();
#include <linux/seccomp.h>
#include <linux/sched.h>
+#include <linux/compat.h>
/* #define SECCOMP_DEBUG 1 */
#define NR_SECCOMP_MODES 1
0, /* null terminated */
};
-#ifdef TIF_32BIT
+#ifdef CONFIG_COMPAT
static int mode1_syscalls_32[] = {
__NR_seccomp_read_32, __NR_seccomp_write_32, __NR_seccomp_exit_32, __NR_seccomp_sigreturn_32,
0, /* null terminated */
switch (mode) {
case 1:
syscall = mode1_syscalls;
-#ifdef TIF_32BIT
- if (test_thread_flag(TIF_32BIT))
+#ifdef CONFIG_COMPAT
+ if (is_compat_task())
syscall = mode1_syscalls_32;
#endif
do {
abort_creds(new);
return retval;
}
-
+
/*
* change the user struct in a credentials set to match the new UID
*/
if (!new_user)
return -EAGAIN;
+ if (!task_can_switch_user(new_user, current)) {
+ free_uid(new_user);
+ return -EINVAL;
+ }
+
if (atomic_read(&new_user->processes) >=
current->signal->rlim[RLIMIT_NPROC].rlim_cur &&
new_user != INIT_USER) {
goto error;
}
- retval = -EAGAIN;
- if (new->uid != old->uid && set_user(new) < 0)
- goto error;
-
+ if (new->uid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
+ }
if (ruid != (uid_t) -1 ||
(euid != (uid_t) -1 && euid != old->uid))
new->suid = new->euid;
retval = -EPERM;
if (capable(CAP_SETUID)) {
new->suid = new->uid = uid;
- if (uid != old->uid && set_user(new) < 0) {
- retval = -EAGAIN;
- goto error;
+ if (uid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
}
} else if (uid != old->uid && uid != new->suid) {
goto error;
goto error;
}
- retval = -EAGAIN;
if (ruid != (uid_t) -1) {
new->uid = ruid;
- if (ruid != old->uid && set_user(new) < 0)
- goto error;
+ if (ruid != old->uid) {
+ retval = set_user(new);
+ if (retval < 0)
+ goto error;
+ }
}
if (euid != (uid_t) -1)
new->euid = euid;
#endif
+#if defined(CONFIG_RT_GROUP_SCHED) && defined(CONFIG_USER_SCHED)
+/*
+ * We need to check if a setuid can take place. This function should be called
+ * before successfully completing the setuid.
+ */
+int task_can_switch_user(struct user_struct *up, struct task_struct *tsk)
+{
+ return sched_rt_can_attach(up->tg, tsk);
+}
+#else
+int task_can_switch_user(struct user_struct *up, struct task_struct *tsk)
+{
+ return 1;
+}
+#endif
+
/*
* Locate the user_struct for the passed UID. If found, take a ref on it. The
* caller must undo that ref with free_uid().
return 0;
}
-void free_user_ns(struct kref *kref)
+/*
+ * Deferred destructor for a user namespace. This is required because
+ * free_user_ns() may be called with uidhash_lock held, but we need to call
+ * back to free_uid() which will want to take the lock again.
+ */
+static void free_user_ns_work(struct work_struct *work)
{
- struct user_namespace *ns;
-
- ns = container_of(kref, struct user_namespace, kref);
+ struct user_namespace *ns =
+ container_of(work, struct user_namespace, destroyer);
free_uid(ns->creator);
kfree(ns);
}
+
+void free_user_ns(struct kref *kref)
+{
+ struct user_namespace *ns =
+ container_of(kref, struct user_namespace, kref);
+
+ INIT_WORK(&ns->destroyer, free_user_ns_work);
+ schedule_work(&ns->destroyer);
+}
EXPORT_SYMBOL(free_user_ns);
*/
static inline int shmem_acct_size(unsigned long flags, loff_t size)
{
- return (flags & VM_ACCOUNT) ?
- security_vm_enough_memory_kern(VM_ACCT(size)) : 0;
+ return (flags & VM_NORESERVE) ?
+ 0 : security_vm_enough_memory_kern(VM_ACCT(size));
}
static inline void shmem_unacct_size(unsigned long flags, loff_t size)
{
- if (flags & VM_ACCOUNT)
+ if (!(flags & VM_NORESERVE))
vm_unacct_memory(VM_ACCT(size));
}
*/
static inline int shmem_acct_block(unsigned long flags)
{
- return (flags & VM_ACCOUNT) ?
- 0 : security_vm_enough_memory_kern(VM_ACCT(PAGE_CACHE_SIZE));
+ return (flags & VM_NORESERVE) ?
+ security_vm_enough_memory_kern(VM_ACCT(PAGE_CACHE_SIZE)) : 0;
}
static inline void shmem_unacct_blocks(unsigned long flags, long pages)
{
- if (!(flags & VM_ACCOUNT))
+ if (flags & VM_NORESERVE)
vm_unacct_memory(pages * VM_ACCT(PAGE_CACHE_SIZE));
}
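Read together, the inverted tests above keep the old accounting invariant under the new flag (a paraphrase of the hunks, not new behavior):

	/* !VM_NORESERVE: charge VM_ACCT(size) once at setup, nothing per block;
	 *  VM_NORESERVE: charge nothing up front, one VM_ACCT(PAGE_CACHE_SIZE)
	 *  per block as pages are instantiated. */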
return 0;
}
-static struct inode *
-shmem_get_inode(struct super_block *sb, int mode, dev_t dev)
+static struct inode *shmem_get_inode(struct super_block *sb, int mode,
+ dev_t dev, unsigned long flags)
{
struct inode *inode;
struct shmem_inode_info *info;
info = SHMEM_I(inode);
memset(info, 0, (char *)inode - (char *)info);
spin_lock_init(&info->lock);
+ info->flags = flags & VM_NORESERVE;
INIT_LIST_HEAD(&info->swaplist);
switch (mode & S_IFMT) {
static int
shmem_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
{
- struct inode *inode = shmem_get_inode(dir->i_sb, mode, dev);
+ struct inode *inode;
int error = -ENOSPC;
+ inode = shmem_get_inode(dir->i_sb, mode, dev, VM_NORESERVE);
if (inode) {
error = security_inode_init_security(inode, dir, NULL, NULL,
NULL);
if (len > PAGE_CACHE_SIZE)
return -ENAMETOOLONG;
- inode = shmem_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0);
+ inode = shmem_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0, VM_NORESERVE);
if (!inode)
return -ENOSPC;
sb->s_flags |= MS_POSIXACL;
#endif
- inode = shmem_get_inode(sb, S_IFDIR | sbinfo->mode, 0);
+ inode = shmem_get_inode(sb, S_IFDIR | sbinfo->mode, 0, VM_NORESERVE);
if (!inode)
goto failed;
inode->i_uid = sbinfo->uid;
return 0;
}
-#define shmem_file_operations ramfs_file_operations
-#define shmem_vm_ops generic_file_vm_ops
-#define shmem_get_inode ramfs_get_inode
-#define shmem_acct_size(a, b) 0
-#define shmem_unacct_size(a, b) do {} while (0)
-#define SHMEM_MAX_BYTES LLONG_MAX
+#define shmem_vm_ops generic_file_vm_ops
+#define shmem_file_operations ramfs_file_operations
+#define shmem_get_inode(sb, mode, dev, flags) ramfs_get_inode(sb, mode, dev)
+#define shmem_acct_size(flags, size) 0
+#define shmem_unacct_size(flags, size) do {} while (0)
+#define SHMEM_MAX_BYTES LLONG_MAX
#endif /* CONFIG_SHMEM */
* shmem_file_setup - get an unlinked file living in tmpfs
* @name: name for dentry (to be seen in /proc/<pid>/maps)
* @size: size to be set for the file
- * @flags: vm_flags
+ * @flags: VM_NORESERVE suppresses pre-accounting of the entire object size
*/
struct file *shmem_file_setup(char *name, loff_t size, unsigned long flags)
{
goto put_dentry;
error = -ENOSPC;
- inode = shmem_get_inode(root->d_sb, S_IFREG | S_IRWXUGO, 0);
+ inode = shmem_get_inode(root->d_sb, S_IFREG | S_IRWXUGO, 0, flags);
if (!inode)
goto close_file;
-#ifdef CONFIG_SHMEM
- SHMEM_I(inode)->flags = (flags & VM_NORESERVE) ? 0 : VM_ACCOUNT;
-#endif
d_instantiate(dentry, inode);
inode->i_size = size;
inode->i_nlink = 0; /* It is unlinked */
unsigned long addr;
int purged = 0;
+ BUG_ON(!size);
BUG_ON(size & ~PAGE_MASK);
va = kmalloc_node(sizeof(struct vmap_area),
addr = ALIGN(vstart, align);
spin_lock(&vmap_area_lock);
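+ /* bail out early if addr + size would wrap past the top of the address space */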
+ if (addr + size - 1 < addr)
+ goto overflow;
+
/* XXX: could have a last_hole cache */
n = vmap_area_root.rb_node;
if (n) {
while (addr + size > first->va_start && addr + size <= vend) {
addr = ALIGN(first->va_end + PAGE_SIZE, align);
+ if (addr + size - 1 < addr)
+ goto overflow;
n = rb_next(&first->rb_node);
if (n)
}
found:
if (addr + size > vend) {
+overflow:
spin_unlock(&vmap_area_lock);
if (!purged) {
purge_vmap_area_lazy();
static DEFINE_SPINLOCK(purge_lock);
LIST_HEAD(valist);
struct vmap_area *va;
+ struct vmap_area *n_va;
int nr = 0;
/*
if (nr) {
spin_lock(&vmap_area_lock);
- list_for_each_entry(va, &valist, purge_list)
+ list_for_each_entry_safe(va, n_va, &valist, purge_list)
__free_vmap_area(va);
spin_unlock(&vmap_area_lock);
}
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_vlan.h>
+#include <linux/netpoll.h>
#include "vlan.h"
/* VLAN rx hw acceleration helper. This acts like netif_{rx,receive_skb}(). */
int __vlan_hwaccel_rx(struct sk_buff *skb, struct vlan_group *grp,
u16 vlan_tci, int polling)
{
+ if (netpoll_rx(skb))
+ return NET_RX_DROP;
+
if (skb_bond_should_drop(skb))
goto drop;
{
int err = NET_RX_SUCCESS;
+ if (netpoll_receive_skb(skb))
+ return NET_RX_DROP;
+
switch (vlan_gro_common(napi, grp, vlan_tci, skb)) {
case -1:
return netif_receive_skb(skb);
if (!skb)
goto out;
+ if (netpoll_receive_skb(skb))
+ goto out;
+
err = NET_RX_SUCCESS;
switch (vlan_gro_common(napi, grp, vlan_tci, skb)) {
int napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
{
+ if (netpoll_receive_skb(skb))
+ return NET_RX_DROP;
+
switch (__napi_gro_receive(napi, skb)) {
case -1:
return netif_receive_skb(skb);
if (!skb)
goto out;
+ if (netpoll_receive_skb(skb))
+ goto out;
+
err = NET_RX_SUCCESS;
switch (__napi_gro_receive(napi, skb)) {
static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
struct tcp_sacktag_state *state,
- unsigned int pcount, int shifted, int mss)
+ unsigned int pcount, int shifted, int mss,
+ int dup_sack)
{
struct tcp_sock *tp = tcp_sk(sk);
struct sk_buff *prev = tcp_write_queue_prev(sk, skb);
}
/* We discard results */
- tcp_sacktag_one(skb, sk, state, 0, pcount);
+ tcp_sacktag_one(skb, sk, state, dup_sack, pcount);
/* Difference in this won't matter, both ACKed by the same cumul. ACK */
TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS);
if (!skb_shift(prev, skb, len))
goto fallback;
- if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss))
+ if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss, dup_sack))
goto out;
/* Hole filled allows collapsing with the next as well, this is very
len = skb->len;
if (skb_shift(prev, skb, len)) {
pcount += tcp_skb_pcount(skb);
- tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss);
+ tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss, 0);
}
out:
/* Tom Kelly's Scalable TCP
*
- * See htt://www-lce.eng.cam.ac.uk/~ctk21/scalable/
+ * See http://www.deneholme.net/tom/scalable/
*
* John Heffner <jheffner@sc.edu>
*/
if (twp != NULL) {
*twp = tw;
- NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
} else if (tw != NULL) {
/* Silly. Should hash-dance instead... */
inet_twsk_deschedule(tw, death_row);
- NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
inet_twsk_put(tw);
}
if (net->ct.sysctl_checksum && hooknum == NF_INET_PRE_ROUTING &&
nf_ip6_checksum(skb, hooknum, dataoff, IPPROTO_ICMPV6)) {
- nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL,
- "nf_ct_icmpv6: ICMPv6 checksum failed\n");
+ if (LOG_INVALID(net, IPPROTO_ICMPV6))
+ nf_log_packet(PF_INET6, 0, skb, NULL, NULL, NULL,
+ "nf_ct_icmpv6: ICMPv6 checksum failed ");
return -NF_ACCEPT;
}
#endif
#define NFULNL_NLBUFSIZ_DEFAULT NLMSG_GOODSIZE
-#define NFULNL_TIMEOUT_DEFAULT HZ /* every second */
+#define NFULNL_TIMEOUT_DEFAULT 100 /* every second */
#define NFULNL_QTHRESH_DEFAULT 100 /* 100 packets */
#define NFULNL_COPY_RANGE_MAX 0xFFFF /* max packet size is limited by 16-bit struct nfattr nfa_len field */
qthreshold = inst->qthreshold;
/* per-rule qthreshold overrides per-instance */
- if (qthreshold > li->u.ulog.qthreshold)
- qthreshold = li->u.ulog.qthreshold;
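+ /* a per-rule qthreshold of 0 means: keep the per-instance value */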
+ if (li->u.ulog.qthreshold)
+ if (qthreshold > li->u.ulog.qthreshold)
+ qthreshold = li->u.ulog.qthreshold;
+
switch (inst->copy_mode) {
case NFULNL_COPY_META:
.release = seq_release_net,
};
-static void *xt_match_seq_start(struct seq_file *seq, loff_t *pos)
+/*
+ * Traversal state for ip{,6}_{tables,matches}, used to cross
+ * the multi-AF mutexes.
+ */
+struct nf_mttg_trav {
+ struct list_head *head, *curr;
+ uint8_t class, nfproto;
+};
+
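+/*
+ * Traversal order: INIT -> NFP_UNSPEC (walk the NFPROTO_UNSPEC list)
+ * -> NFP_SPEC (walk the per-nfproto list) -> DONE.  The mutex guarding
+ * the list currently being walked is held throughout and is released
+ * on the transition or in the ->stop() callback.
+ */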
+enum {
+ MTTG_TRAV_INIT,
+ MTTG_TRAV_NFP_UNSPEC,
+ MTTG_TRAV_NFP_SPEC,
+ MTTG_TRAV_DONE,
+};
+
+static void *xt_mttg_seq_next(struct seq_file *seq, void *v, loff_t *ppos,
+ bool is_target)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ static const uint8_t next_class[] = {
+ [MTTG_TRAV_NFP_UNSPEC] = MTTG_TRAV_NFP_SPEC,
+ [MTTG_TRAV_NFP_SPEC] = MTTG_TRAV_DONE,
+ };
+ struct nf_mttg_trav *trav = seq->private;
+
+ switch (trav->class) {
+ case MTTG_TRAV_INIT:
+ trav->class = MTTG_TRAV_NFP_UNSPEC;
+ mutex_lock(&xt[NFPROTO_UNSPEC].mutex);
+ trav->head = trav->curr = is_target ?
+ &xt[NFPROTO_UNSPEC].target : &xt[NFPROTO_UNSPEC].match;
+ break;
+ case MTTG_TRAV_NFP_UNSPEC:
+ trav->curr = trav->curr->next;
+ if (trav->curr != trav->head)
+ break;
+ mutex_unlock(&xt[NFPROTO_UNSPEC].mutex);
+ mutex_lock(&xt[trav->nfproto].mutex);
+ trav->head = trav->curr = is_target ?
+ &xt[trav->nfproto].target : &xt[trav->nfproto].match;
+ trav->class = next_class[trav->class];
+ break;
+ case MTTG_TRAV_NFP_SPEC:
+ trav->curr = trav->curr->next;
+ if (trav->curr != trav->head)
+ break;
+ /* fallthru, _stop will unlock */
+ default:
+ return NULL;
+ }
- mutex_lock(&xt[af].mutex);
- return seq_list_start(&xt[af].match, *pos);
+ if (ppos != NULL)
+ ++*ppos;
+ return trav;
}
-static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+static void *xt_mttg_seq_start(struct seq_file *seq, loff_t *pos,
+ bool is_target)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ struct nf_mttg_trav *trav = seq->private;
+ unsigned int j;
- return seq_list_next(v, &xt[af].match, pos);
+ trav->class = MTTG_TRAV_INIT;
+ for (j = 0; j < *pos; ++j)
+ if (xt_mttg_seq_next(seq, NULL, NULL, is_target) == NULL)
+ return NULL;
+ return trav;
}
-static void xt_match_seq_stop(struct seq_file *seq, void *v)
+static void xt_mttg_seq_stop(struct seq_file *seq, void *v)
{
- struct proc_dir_entry *pde = seq->private;
- u_int16_t af = (unsigned long)pde->data;
+ struct nf_mttg_trav *trav = seq->private;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ mutex_unlock(&xt[NFPROTO_UNSPEC].mutex);
+ break;
+ case MTTG_TRAV_NFP_SPEC:
+ mutex_unlock(&xt[trav->nfproto].mutex);
+ break;
+ }
+}
- mutex_unlock(&xt[af].mutex);
+static void *xt_match_seq_start(struct seq_file *seq, loff_t *pos)
+{
+ return xt_mttg_seq_start(seq, pos, false);
}
-static int xt_match_seq_show(struct seq_file *seq, void *v)
+static void *xt_match_seq_next(struct seq_file *seq, void *v, loff_t *ppos)
{
- struct xt_match *match = list_entry(v, struct xt_match, list);
+ return xt_mttg_seq_next(seq, v, ppos, false);
+}
- if (strlen(match->name))
- return seq_printf(seq, "%s\n", match->name);
- else
- return 0;
+static int xt_match_seq_show(struct seq_file *seq, void *v)
+{
+ const struct nf_mttg_trav *trav = seq->private;
+ const struct xt_match *match;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ case MTTG_TRAV_NFP_SPEC:
+ if (trav->curr == trav->head)
+ return 0;
+ match = list_entry(trav->curr, struct xt_match, list);
+ return (*match->name == '\0') ? 0 :
+ seq_printf(seq, "%s\n", match->name);
+ }
+ return 0;
}
static const struct seq_operations xt_match_seq_ops = {
.start = xt_match_seq_start,
.next = xt_match_seq_next,
- .stop = xt_match_seq_stop,
+ .stop = xt_mttg_seq_stop,
.show = xt_match_seq_show,
};
static int xt_match_open(struct inode *inode, struct file *file)
{
+ struct seq_file *seq;
+ struct nf_mttg_trav *trav;
int ret;
- ret = seq_open(file, &xt_match_seq_ops);
- if (!ret) {
- struct seq_file *seq = file->private_data;
+ trav = kmalloc(sizeof(*trav), GFP_KERNEL);
+ if (trav == NULL)
+ return -ENOMEM;
- seq->private = PDE(inode);
+ ret = seq_open(file, &xt_match_seq_ops);
+ if (ret < 0) {
+ kfree(trav);
+ return ret;
}
- return ret;
+
+ seq = file->private_data;
+ seq->private = trav;
+ trav->nfproto = (unsigned long)PDE(inode)->data;
+ return 0;
}
static const struct file_operations xt_match_ops = {
.open = xt_match_open,
.read = seq_read,
.llseek = seq_lseek,
- .release = seq_release,
+ .release = seq_release_private,
};
static void *xt_target_seq_start(struct seq_file *seq, loff_t *pos)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- mutex_lock(&xt[af].mutex);
- return seq_list_start(&xt[af].target, *pos);
+ return xt_mttg_seq_start(seq, pos, true);
}
-static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+static void *xt_target_seq_next(struct seq_file *seq, void *v, loff_t *ppos)
{
- struct proc_dir_entry *pde = (struct proc_dir_entry *)seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- return seq_list_next(v, &xt[af].target, pos);
-}
-
-static void xt_target_seq_stop(struct seq_file *seq, void *v)
-{
- struct proc_dir_entry *pde = seq->private;
- u_int16_t af = (unsigned long)pde->data;
-
- mutex_unlock(&xt[af].mutex);
+ return xt_mttg_seq_next(seq, v, ppos, true);
}
static int xt_target_seq_show(struct seq_file *seq, void *v)
{
- struct xt_target *target = list_entry(v, struct xt_target, list);
-
- if (strlen(target->name))
- return seq_printf(seq, "%s\n", target->name);
- else
- return 0;
+ const struct nf_mttg_trav *trav = seq->private;
+ const struct xt_target *target;
+
+ switch (trav->class) {
+ case MTTG_TRAV_NFP_UNSPEC:
+ case MTTG_TRAV_NFP_SPEC:
+ if (trav->curr == trav->head)
+ return 0;
+ target = list_entry(trav->curr, struct xt_target, list);
+ return (*target->name == '\0') ? 0 :
+ seq_printf(seq, "%s\n", target->name);
+ }
+ return 0;
}
static const struct seq_operations xt_target_seq_ops = {
.start = xt_target_seq_start,
.next = xt_target_seq_next,
- .stop = xt_target_seq_stop,
+ .stop = xt_mttg_seq_stop,
.show = xt_target_seq_show,
};
static int xt_target_open(struct inode *inode, struct file *file)
{
+ struct seq_file *seq;
+ struct nf_mttg_trav *trav;
int ret;
- ret = seq_open(file, &xt_target_seq_ops);
- if (!ret) {
- struct seq_file *seq = file->private_data;
+ trav = kmalloc(sizeof(*trav), GFP_KERNEL);
+ if (trav == NULL)
+ return -ENOMEM;
- seq->private = PDE(inode);
+ ret = seq_open(file, &xt_target_seq_ops);
+ if (ret < 0) {
+ kfree(trav);
+ return ret;
}
- return ret;
+
+ seq = file->private_data;
+ seq->private = trav;
+ trav->nfproto = (unsigned long)PDE(inode)->data;
+ return 0;
}
static const struct file_operations xt_target_ops = {
.open = xt_target_open,
.read = seq_read,
.llseek = seq_lseek,
- .release = seq_release,
+ .release = seq_release_private,
};
#define FORMAT_TABLES "_tables_names"
struct recent_entry *e;
char buf[sizeof("+b335:1d35:1e55:dead:c0de:1715:5afe:c0de")];
const char *c = buf;
- union nf_inet_addr addr;
+ union nf_inet_addr addr = {};
u_int16_t family;
bool add, succ;
{
struct drr_sched *q = qdisc_priv(sch);
struct drr_class *cl = (struct drr_class *)*arg;
+ struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_DRR_MAX + 1];
u32 quantum;
int err;
- err = nla_parse_nested(tb, TCA_DRR_MAX, tca[TCA_OPTIONS], drr_policy);
+ if (!opt)
+ return -EINVAL;
+
+ err = nla_parse_nested(tb, TCA_DRR_MAX, opt, drr_policy);
if (err < 0)
return err;
my $P = $0;
$P =~ s@.*/@@g;
-my $V = '0.27';
+my $V = '0.28';
use Getopt::Long qw(:config no_auto_abbrev);
__iomem|
__must_check|
__init_refok|
- __kprobes
+ __kprobes|
+ __ref
}x;
our $Attribute = qr{
const|
$realfile =~ s@^([^/]*)/@@;
$p1_prefix = $1;
- if ($tree && $p1_prefix ne '' && -e "$root/$p1_prefix") {
+ if (!$file && $tree && $p1_prefix ne '' &&
+ -e "$root/$p1_prefix") {
WARN("patch prefix '$p1_prefix' exists, appears to be a -p0 patch\n");
}
}
# TEST: allow direct testing of the attribute matcher.
if ($dbg_attr) {
- if ($line =~ /^.\s*$Attribute\s*$/) {
+ if ($line =~ /^.\s*$Modifier\s*$/) {
ERROR("TEST: is attr\n" . $herecurr);
- } elsif ($dbg_attr > 1 && $line =~ /^.+($Attribute)/) {
+ } elsif ($dbg_attr > 1 && $line =~ /^.+($Modifier)/) {
ERROR("TEST: is not attr ($1 is)\n". $herecurr);
}
next;
# * goes on variable not on type
# (char*[ const])
- if ($line =~ m{\($NonptrType(\s*\*[\s\*]*(?:$Modifier\s*)*)\)}) {
+ if ($line =~ m{\($NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)\)}) {
my ($from, $to) = ($1, $1);
# Should start with a space.
if ($from ne $to) {
ERROR("\"(foo$from)\" should be \"(foo$to)\"\n" . $herecurr);
}
- } elsif ($line =~ m{\b$NonptrType(\s*\*[\s\*]*(?:$Modifier\s*)?)($Ident)}) {
+ } elsif ($line =~ m{\b$NonptrType(\s*(?:$Modifier\b\s*|\*\s*)+)($Ident)}) {
my ($from, $to, $ident) = ($1, $1, $2);
# Should start with a space.
# Modifiers should have spaces.
$to =~ s/(\b$Modifier$)/$1 /;
- #print "from<$from> to<$to>\n";
- if ($from ne $to) {
+ #print "from<$from> to<$to> ident<$ident>\n";
+ if ($from ne $to && $ident !~ /^$Modifier$/) {
ERROR("\"foo${from}bar\" should be \"foo${to}bar\"\n" . $herecurr);
}
}
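For illustration (an example of ours, not from the patch), the widened patterns now also match modifiers that precede the '*', so a declaration like the following gets its pointer-spacing check applied:

	char __iomem *regs;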
if ($ctx !~ /[WEBC]x./ && $ca !~ /(?:\)|!|~|\*|-|\&|\||\+\+|\-\-|\{)$/) {
ERROR("space required before that '$op' $at\n" . $hereptr);
}
- if ($op eq '*' && $cc =~/\s*const\b/) {
+ if ($op eq '*' && $cc =~/\s*$Modifier\b/) {
# A unary '*' may be const
} elsif ($ctx =~ /.xW/) {
- ERROR("space prohibited after that '$op' $at\n" . $hereptr);
+ ERROR("Aspace prohibited after that '$op' $at\n" . $hereptr);
}
# unary ++ and unary -- are allowed no space on one side.
if ($line =~ /\bin_atomic\s*\(/) {
if ($realfile =~ m@^drivers/@) {
ERROR("do not use in_atomic in drivers\n" . $herecurr);
- } else {
+ } elsif ($realfile !~ m@^kernel/@) {
WARN("use of in_atomic() is incorrect outside core kernel code\n" . $herecurr);
}
}
if (!S_ISSOCK(inode->i_mode) ||
((mask & (MAY_WRITE | MAY_APPEND)) == 0))
return 0;
-
sock = SOCKET_I(inode);
sk = sock->sk;
+ if (sk == NULL)
+ return 0;
sksec = sk->sk_security;
- if (sksec->nlbl_state != NLBL_REQUIRE)
+ if (sksec == NULL || sksec->nlbl_state != NLBL_REQUIRE)
return 0;
local_bh_disable();
while (dst_frames1 > 0) {
S1 = S2;
if (src_frames1-- > 0) {
- S1 = *src;
+ S2 = *src;
src += src_step;
}
if (pos & ~R_MASK) {
MODULE_PARM_DESC(enable, "Enable Audiowerk2 soundcard.");
static struct pci_device_id snd_aw2_ids[] = {
- {PCI_VENDOR_ID_SAA7146, PCI_DEVICE_ID_SAA7146, PCI_ANY_ID, PCI_ANY_ID,
+ {PCI_VENDOR_ID_SAA7146, PCI_DEVICE_ID_SAA7146, 0, 0,
0, 0, 0},
{0}
};
.ca0151_chip = 1,
.spk71 = 1,
.spdif_bug = 1,
+ .invert_shared_spdif = 1, /* digital/analog switch swapped */
.ac97_chip = 1} ,
{.vendor = 0x1102, .device = 0x0004, .subsystem = 0x10021102,
.driver = "Audigy2", .name = "SB Audigy 2 Platinum [SB0240P]",
{
struct snd_hwdep *hwdep = dev_get_drvdata(dev);
struct hda_codec *codec = hwdep->private_data;
- char *p;
- struct hda_verb verb, *v;
+ struct hda_verb *v;
+ int nid, verb, param;
- verb.nid = simple_strtoul(buf, &p, 0);
- verb.verb = simple_strtoul(p, &p, 0);
- verb.param = simple_strtoul(p, &p, 0);
- if (!verb.nid || !verb.verb || !verb.param)
+ if (sscanf(buf, "%i %i %i", &nid, &verb, &param) != 3)
+ return -EINVAL;
+ if (!nid || !verb)
return -EINVAL;
v = snd_array_new(&codec->init_verbs);
if (!v)
return -ENOMEM;
- *v = verb;
+ v->nid = nid;
+ v->verb = verb;
+ v->param = param;
return count;
}
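With the sscanf()-based parser the attribute takes three integers in any base sscanf accepts; for example (the sysfs path and verb values are illustrative and depend on the card/codec numbering):

	# echo "0x14 0x707 0x40" > /sys/class/sound/hwC0D0/init_verbs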
SND_PCI_QUIRK(0x1028, 0x20ac, "Dell Studio Desktop", 0x01),
/* including bogus ALC268 in slot#2 that conflicts with ALC888 */
SND_PCI_QUIRK(0x17c0, 0x4085, "Medion MD96630", 0x01),
+ /* conflict of ALC268 in slot#3 (digital I/O); a temporary fix */
+ SND_PCI_QUIRK(0x1179, 0xff00, "Toshiba laptop", 0x03),
{}
};
case 0x106b3e00: /* iMac 24 Aluminium */
board_config = ALC885_IMAC24;
break;
+ case 0x106b00a0: /* MacBookPro3,1 - Another revision */
case 0x106b00a1: /* Macbook (might be wrong - PCI SSID?) */
case 0x106b00a4: /* MacbookPro4,1 */
case 0x106b2c00: /* Macbook Pro rev3 */
ALC888_ACER_ASPIRE_4930G),
SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G",
ALC888_ACER_ASPIRE_4930G),
+ SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G",
+ ALC888_ACER_ASPIRE_4930G),
SND_PCI_QUIRK(0x1025, 0, "Acer laptop", ALC883_ACER), /* default Acer */
SND_PCI_QUIRK(0x1028, 0x020d, "Dell Inspiron 530", ALC888_6ST_DELL),
SND_PCI_QUIRK(0x103c, 0x2a3d, "HP Pavillion", ALC883_6ST_DIG),
SND_PCI_QUIRK(0x103c, 0x1309, "HP xw4*00", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x130a, "HP xw6*00", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x130b, "HP xw8*00", ALC262_HP_BPC),
+ SND_PCI_QUIRK(0x103c, 0x170b, "HP xw*", ALC262_HP_BPC),
SND_PCI_QUIRK(0x103c, 0x2800, "HP D7000", ALC262_HP_BPC_D7000_WL),
SND_PCI_QUIRK(0x103c, 0x2801, "HP D7000", ALC262_HP_BPC_D7000_WF),
SND_PCI_QUIRK(0x103c, 0x2802, "HP D7000", ALC262_HP_BPC_D7000_WL),
case STAC_DELL_M4_3:
spec->num_dmics = 1;
spec->num_smuxes = 0;
- spec->num_dmuxes = 0;
+ spec->num_dmuxes = 1;
break;
default:
spec->num_dmics = STAC92HD71BXX_NUM_DMICS;
int capture_chips;
int fw_file_set;
int firmware_num;
- int is_hr_stereo:1;
- int board_has_aes1:1; /* if 1 board has AES1 plug and SRC */
- int board_has_analog:1; /* if 0 the board is digital only */
- int board_has_mic:1; /* if 1 the board has microphone input */
- int board_aes_in_192k:1;/* if 1 the aes input plugs do support 192kHz */
- int mono_capture:1; /* if 1 the board does mono capture */
+ unsigned int is_hr_stereo:1;
+ unsigned int board_has_aes1:1; /* if 1 board has AES1 plug and SRC */
+ unsigned int board_has_analog:1; /* if 0 the board is digital only */
+ unsigned int board_has_mic:1; /* if 1 the board has microphone input */
+ unsigned int board_aes_in_192k:1; /* if 1 the AES input plugs support 192kHz */
+ unsigned int mono_capture:1; /* if 1 the board does mono capture */
struct snd_dma_buffer hostport;