/*
 * Added 2014/1/22
 * I recommend installing the newest release of the distribution.
 * http://my.oschina.net/startphp/blog/138382 — check there whether your hardware is supported.
 * If it isn't, just use an external USB wireless adapter instead.
 */
To make this easier to read, the post is organized as: article links, the problem, the fix, links for the fix, and some Linux extras.
A lot of this I don't understand myself — the deeper I dug, the deeper it went — but the actual fix is simple.
http://www.broadcom.com/support/802.11/linux_sta.php — driver download page
http://www.broadcom.com/docs/linux_sta/README.txt — installation instructions
Main reference articles:
http://blog.chinaunix.net/uid-25385953-id-336978.html
http://www.blogjava.net/leisure/archive/2011/09/13/358496.html
http://my.oschina.net/kursk/blog/7896
(The patch link they all point to,
http://www.broadcom.com/docs/linux_sta/5_100_82_38.patch, has expired.)
The problem:
There are two main issues. The first is that the build wants a patch because the asm/system.h header is missing. The second is this compile error:
/tmp/hybrid_wl/src/wl/sys/wl_linux.c:389:2: error: unknown field 'ndo_set_multicast_list' specified in initializer
https://wiki.ubuntu.com/KernelTeam/ReleaseStatus/Quantal links the corresponding bug report:
994255 - bcmwl-kernel-source 5.100.82.38+bdcom-0ubuntu6.1: bcmwl kernel module failed to build [fatal error: asm/system.h: No such file or directory]
"Fixed in Quantal, but want it SRU'd for Precise as it will affect anyone running a 12.10 kernel in 12.04"
The fix
The fix is simple: in hybrid_wl/src/wl/sys/wl_linux.c, delete the #include of asm/system.h and change the .ndo_set_multicast_list initializer to .ndo_set_rx_mode = wl_set_multicast_list, — after that, make succeeds.
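For reference, here is a minimal sketch of what that edit to wl_linux.c looks like. It is illustrative only: the variable name wl_netdev_ops and the neighbouring callbacks are assumptions; only wl_set_multicast_list and the two field names come from the fix above.

/* Sketch of the edit inside hybrid_wl/src/wl/sys/wl_linux.c (excerpt, not a standalone file). */

/* (a) The include that fails on kernels without asm/system.h — delete it: */
/* #include <asm/system.h> */

/* (b) ndo_set_multicast_list was dropped from struct net_device_ops around kernel 3.2,
 *     so the initializer has to use the replacement field ndo_set_rx_mode: */
static const struct net_device_ops wl_netdev_ops = {
	.ndo_open        = wl_open,                  /* assumed neighbouring callbacks */
	.ndo_stop        = wl_close,
	/* .ndo_set_multicast_list = wl_set_multicast_list,   <- old field, removed */
	.ndo_set_rx_mode = wl_set_multicast_list,    /* new field name */
};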
I suspect this Dell machine itself is part of the problem: under Windows 8 the wireless card kept acting up, and it only got better after I installed Ubuntu. There is more material below if you have time to look through it.
References:
https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers/+bug/993506 — Ubuntu's explanation
http://book.51cto.com/art/201007/213543.htm — layout of the kernel source tree
http://blog.csdn.net/xiongyaoqiongyao/article/details/8258829 — explanation of the header files
http://forums.fedoraforum.org/archive/index.php/t-280821.html — a Fedora thread for reference
http://www.mindwerks.net/2012/06/wireless-bcm4312-with-the-3-4-and-3-5-kernel/ — gives a working fix
https://lkml.org/lkml/2012/3/29/360 — on writing the system.h file; I don't understand this one, so if anyone does, please explain it.
http://www.linuxdiyf.com/viewarticle.php?id=194346 — has the source of system.h; if you know how to build it yourself, it is worth a look.
ftp://202.38.93.94/Gentoo/eta/usr/include/ — system header files (incomplete)
How to handle the ndo_set_multicast_list build error: http://www.mindwerks.net/2011/11/wireless-bcm4312-3-2-kernel/
————————————————————
(A list of kernel header files as #include lines; one of them is the header that contains the definition of struct inode and the MINOR/MAJOR macros.)
Main header directory: include
The header directory contains 32 .h files in all: 13 in the top-level directory, 4 in the asm subdirectory, 10 in the linux subdirectory, and 5 in the sys subdirectory. Their functions are outlined below; see Chapter 14 for what each one actually does and contains.
(1) Architecture-specific header subdirectory include/asm
These headers mainly define data structures, macro functions, and variables that are closely tied to the CPU architecture, along with some inline-assembly function macros for setting and reading descriptor parameters. 4 files in total.
(2) Linux-kernel-specific header subdirectory include/linux
(3) System-specific data structure subdirectory include/sys
————————————————————
Before browsing the kernel code, it helps to know how the source tree is laid out overall. By convention the kernel source is installed under /usr/src/linux, and every subdirectory there corresponds to a specific functional subset of the kernel. The following is a brief description based on version 2.6.23.
(1) Documentation.
There is no kernel code in this directory, just a large amount of documentation of varying quality, but it can often be a great help.
(2) arch.
All architecture-specific code lives in this directory and in the include/asm-*/ directories. Every architecture Linux supports has its own subdirectory under arch, and each of those contains at least three subdirectories:
kernel: architecture-specific implementations of features such as signal handling and SMP.
lib: architecture-specific implementations of common functions such as strlen and memcpy.
mm: architecture-specific memory-management code.
Besides these three, most architectures also have a boot subdirectory where needed, containing some or all of the platform-specific code used to boot the kernel on that hardware.
In addition, most architecture-specific directories contain further subdirectories for optional features. For example, the i386 directory has a math-emu subdirectory with code that emulates an FPU on CPUs that lack a math coprocessor.
(3) drivers.
This is the largest directory in the kernel. The drivers for graphics cards, network cards, SCSI adapters, the PCI bus, the USB bus, and every other peripheral or bus Linux supports can be found here.
(4) fs.
The code of the virtual file system (VFS, Virtual File System) and of the individual file systems lives here. Every file system Linux supports has a corresponding subdirectory under fs; the ext2 file system, for example, corresponds to fs/ext2.
A file system is the intermediary between a storage device and the processes that need to access it. The storage device may be local and physically accessible, such as a hard disk or a CD-ROM drive, which use the ext2/ext3 and isofs file systems respectively, or it may be accessed over the network via the NFS file system.
There are also virtual file systems, such as proc, which presents itself as a standard file system even though its files exist only in memory and occupy no disk space.
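As a tiny illustration of that last point (not from the original text; /proc/version is just a convenient example), proc files can be read like ordinary files from user space:

/* read one line from a proc file exactly as if it were a regular file */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/version", "r");   /* provided by the proc filesystem, lives only in memory */

	if (f && fgets(line, sizeof(line), f))
		printf("%s", line);
	if (f)
		fclose(f);
	return 0;
}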
(5) include.
This directory holds most of the kernel's header files, grouped into the following subdirectories.
include/asm-*/: there are several of these, one per subdirectory of arch, e.g. include/asm-alpha, include/asm-arm, include/asm-i386, and so on. The files in each define the preprocessor macros and inline functions required to support the given architecture; most of the inline functions are implemented fully or partly in assembly.
When the kernel is compiled, the build system creates a symbolic link include/asm that points to the architecture-specific directory; for the arm platform, for example, include/asm points to include/asm-arm. Architecture-independent kernel code can therefore pull in architecture-specific headers in the form:
#include <asm/xxx.h>
include/linux/: platform-independent headers live here. This directory is normally linked to /usr/include/linux (or all of its files are copied into /usr/include/linux), so the line
#include <linux/xxx.h>
refers to the same header contents whether it appears in a user application or in kernel code.
The other subdirectories under include are not covered here.
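As a small illustrative example of mixing the two kinds of headers, here is a do-nothing module skeleton; the specific headers chosen are common ones and are assumptions, not something the text above prescribes:

/* minimal module skeleton: platform-independent headers plus one arch-specific header */
#include <linux/module.h>   /* platform-independent, from include/linux/ */
#include <linux/init.h>     /* platform-independent */
#include <asm/io.h>         /* architecture-specific, resolved through the include/asm link */

static int __init demo_init(void)
{
	return 0;               /* nothing to do; this is only a layout illustration */
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");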
(6) init.
The kernel's initialization code, including main.c, the code that creates the early user space, and other initialization code.
(7) ipc.
IPC, i.e. interprocess communication. Contains the code for shared memory, semaphores, and the other forms of IPC.
(8) kernel.
The most central part of the kernel, including the process scheduler (kernel/sched.c) and process creation and teardown (kernel/fork.c and kernel/exit.c). The architecture-dependent part of this core code lives under arch/*/kernel.
(9) lib.
Library code implementing a generic subset of a standard C library, including string and memory functions (strlen, memcpy, and the like) and the sprintf and atoi families. Unlike the code under arch/*/lib, the library code here is written in C and can be used directly when the kernel is ported to a new architecture.
(10) mm.
The architecture-independent part of the memory-management code; the architecture-dependent part lives under arch/*/mm.
(11) net.
Networking code, implementing the common network protocols such as TCP/IP and IPX.
(12) scripts.
There is no kernel code in this directory, only the scripts used to configure the kernel. When you run a command such as make menuconfig or make xconfig to configure the kernel, you are interacting with scripts located here.
(13) block.
The implementation of the block layer. Originally part of the block-layer code lived in the drivers directory and part in fs; starting with 2.6.15 the core block-layer code was pulled out into the top-level block directory.
(14) crypto.
The crypto API used by the kernel itself, implementing the common ciphers and hash algorithms as well as some compression and CRC checksum algorithms.
(15) security.
This directory contains the code for the various Linux security models, such as NSA Security-Enhanced Linux.
(16) sound.
Sound card drivers and other sound-related code.
(17) usr.
Implements cpio and the related pieces used for packing and compression.
————————————————————
The system.h file below is not Ubuntu's, but it can serve as a reference. It can be used by copy and paste as-is; no compilation is needed.
#ifndef _ASM_X86_SYSTEM_H
#define _ASM_X86_SYSTEM_H
#include <asm/asm.h>
#include <asm/segment.h>
#include <asm/cpufeature.h>
#include <asm/cmpxchg.h>
#include <asm/nops.h>
#include <linux/kernel.h>
#include <linux/irqflags.h>
/* entries in ARCH_DLINFO: */
#if defined(CONFIG_IA32_EMULATION) || !defined(CONFIG_X86_64)
# define AT_VECTOR_SIZE_ARCH 2
#else /* else it's non-compat x86-64 */
# define AT_VECTOR_SIZE_ARCH 1
#endif
struct task_struct; /* one of the stranger aspects of C forward declarations */
struct task_struct *__switch_to(struct task_struct *prev,
struct task_struct *next);
struct tss_struct;
void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
struct tss_struct *tss);
extern void show_regs_common(void);
#ifdef CONFIG_X86_32
#ifdef CONFIG_CC_STACKPROTECTOR
#define __switch_canary \
"movl %P[task_canary](%[next]), %%ebx\n\t" \
"movl %%ebx, "__percpu_arg([stack_canary])"\n\t"
#define __switch_canary_oparam \
, [stack_canary] "=m" (stack_canary.canary)
#define __switch_canary_iparam \
, [task_canary] "i" (offsetof(struct task_struct, stack_canary))
#else /* CC_STACKPROTECTOR */
#define __switch_canary
#define __switch_canary_oparam
#define __switch_canary_iparam
#endif /* CC_STACKPROTECTOR */
/*
* Saving eflags is important. It switches not only IOPL between tasks,
* it also protects other tasks from NT leaking through sysenter etc.
*/
#define switch_to(prev, next, last) \
do { \
/* \
* Context-switching clobbers all registers, so we clobber \
* them explicitly, via unused output variables. \
* (EAX and EBP is not listed because EBP is saved/restored \
* explicitly for wchan access and EAX is the return value of \
* __switch_to()) \
*/ \
unsigned long ebx, ecx, edx, esi, edi; \
\
asm volatile("pushfl\n\t" /* save flags */ \
"pushl %%ebp\n\t" /* save EBP */ \
"movl %%esp,%[prev_sp]\n\t" /* save ESP */ \
"movl %[next_sp],%%esp\n\t" /* restore ESP */ \
"movl $1f,%[prev_ip]\n\t" /* save EIP */ \
"pushl %[next_ip]\n\t" /* restore EIP */ \
__switch_canary \
"jmp __switch_to\n" /* regparm call */ \
"1:\t" \
"popl %%ebp\n\t" /* restore EBP */ \
"popfl\n" /* restore flags */ \
\
/* output parameters */ \
: [prev_sp] "=m" (prev->thread.sp), \
[prev_ip] "=m" (prev->thread.ip), \
"=a" (last), \
\
/* clobbered output registers: */ \
"=b" (ebx), "=c" (ecx), "=d" (edx), \
"=S" (esi), "=D" (edi) \
\
__switch_canary_oparam \
\
/* input parameters: */ \
: [next_sp] "m" (next->thread.sp), \
[next_ip] "m" (next->thread.ip), \
\
/* regparm parameters for __switch_to(): */ \
[prev] "a" (prev), \
[next] "d" (next) \
\
__switch_canary_iparam \
\
: /* reloaded segment registers */ \
"memory"); \
} while (0)
/*
* disable hlt during certain critical i/o operations
*/
#define HAVE_DISABLE_HLT
#else
#define __SAVE(reg, offset) "movq %%" #reg ",(14-" #offset ")*8(%%rsp)\n\t"
#define __RESTORE(reg, offset) "movq (14-" #offset ")*8(%%rsp),%%" #reg "\n\t"
/* frame pointer must be last for get_wchan */
#define SAVE_CONTEXT "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
#define RESTORE_CONTEXT "movq %%rbp,%%rsi ; popq %%rbp ; popf\t"
#define __EXTRA_CLOBBER \
, "rcx", "rbx", "rdx", "r8", "r9", "r10", "r11", \
"r12", "r13", "r14", "r15"
#ifdef CONFIG_CC_STACKPROTECTOR
#define __switch_canary \
"movq %P[task_canary](%%rsi),%%r8\n\t" \
"movq %%r8,"__percpu_arg([gs_canary])"\n\t"
#define __switch_canary_oparam \
, [gs_canary] "=m" (irq_stack_union.stack_canary)
#define __switch_canary_iparam \
, [task_canary] "i" (offsetof(struct task_struct, stack_canary))
#else /* CC_STACKPROTECTOR */
#define __switch_canary
#define __switch_canary_oparam
#define __switch_canary_iparam
#endif /* CC_STACKPROTECTOR */
/* Save restore flags to clear handle leaking NT */
#define switch_to(prev, next, last) \
asm volatile(SAVE_CONTEXT \
"movq %%rsp,%P[threadrsp](%[prev])\n\t" /* save RSP */ \
"movq %P[threadrsp](%[next]),%%rsp\n\t" /* restore RSP */ \
"call __switch_to\n\t" \
"movq "__percpu_arg([current_task])",%%rsi\n\t" \
__switch_canary \
"movq %P[thread_info](%%rsi),%%r8\n\t" \
"movq %%rax,%%rdi\n\t" \
"testl %[_tif_fork],%P[ti_flags](%%r8)\n\t" \
"jnz ret_from_fork\n\t" \
RESTORE_CONTEXT \
: "=a" (last) \
__switch_canary_oparam \
: [next] "S" (next), [prev] "D" (prev), \
[threadrsp] "i" (offsetof(struct task_struct, thread.sp)), \
[ti_flags] "i" (offsetof(struct thread_info, flags)), \
[_tif_fork] "i" (_TIF_FORK), \
[thread_info] "i" (offsetof(struct task_struct, stack)), \
[current_task] "m" (current_task) \
__switch_canary_iparam \
: "memory", "cc" __EXTRA_CLOBBER)
#endif
#ifdef __KERNEL__
extern void native_load_gs_index(unsigned);
/*
* Load a segment. Fall back on loading the zero
* segment if something goes wrong..
*/
#define loadsegment(seg, value) \
do { \
unsigned short __val = (value); \
\
asm volatile(" \n" \
"1: movl %k0,%%" #seg " \n" \
\
".section .fixup,\"ax\" \n" \
"2: xorl %k0,%k0 \n" \
" jmp 1b \n" \
".previous \n" \
\
_ASM_EXTABLE(1b, 2b) \
\
: "+r" (__val) : : "memory"); \
} while (0)
/*
* Save a segment register away
*/
#define savesegment(seg, value) \
asm("mov %%" #seg ",%0":"=r" (value) : : "memory")
/*
* x86_32 user gs accessors.
*/
#ifdef CONFIG_X86_32
#ifdef CONFIG_X86_32_LAZY_GS
#define get_user_gs(regs) (u16)({unsigned long v; savesegment(gs, v); v;})
#define set_user_gs(regs, v) loadsegment(gs, (unsigned long)(v))
#define task_user_gs(tsk) ((tsk)->thread.gs)
#define lazy_save_gs(v) savesegment(gs, (v))
#define lazy_load_gs(v) loadsegment(gs, (v))
#else /* X86_32_LAZY_GS */
#define get_user_gs(regs) (u16)((regs)->gs)
#define set_user_gs(regs, v) do { (regs)->gs = (v); } while (0)
#define task_user_gs(tsk) (task_pt_regs(tsk)->gs)
#define lazy_save_gs(v) do { } while (0)
#define lazy_load_gs(v) do { } while (0)
#endif /* X86_32_LAZY_GS */
#endif /* X86_32 */
static inline unsigned long get_limit(unsigned long segment)
{
unsigned long __limit;
asm("lsll %1,%0" : "=r" (__limit) : "r" (segment));
return __limit + 1;
}
static inline void native_clts(void)
{
asm volatile("clts");
}
/*
* Volatile isn't enough to prevent the compiler from reordering the
* read/write functions for the control registers and messing everything up.
* A memory clobber would solve the problem, but would prevent reordering of
* all loads stores around it, which can hurt performance. Solution is to
* use a variable and mimic reads and writes to it to enforce serialization
*/
static unsigned long __force_order;
static inline unsigned long native_read_cr0(void)
{
unsigned long val;
asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
return val;
}
static inline void native_write_cr0(unsigned long val)
{
asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
}
static inline unsigned long native_read_cr2(void)
{
unsigned long val;
asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
return val;
}
static inline void native_write_cr2(unsigned long val)
{
asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
}
static inline unsigned long native_read_cr3(void)
{
unsigned long val;
asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
return val;
}
static inline void native_write_cr3(unsigned long val)
{
asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
}
static inline unsigned long native_read_cr4(void)
{
unsigned long val;
asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
return val;
}
static inline unsigned long native_read_cr4_safe(void)
{
unsigned long val;
/* This could fault if %cr4 does not exist. In x86_64, a cr4 always
* exists, so it will never fail. */
#ifdef CONFIG_X86_32
asm volatile("1: mov %%cr4, %0\n"
"2:\n"
_ASM_EXTABLE(1b, 2b)
: "=r" (val), "=m" (__force_order) : "0" (0));
#else
val = native_read_cr4();
#endif
return val;
}
static inline void native_write_cr4(unsigned long val)
{
asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
}
#ifdef CONFIG_X86_64
static inline unsigned long native_read_cr8(void)
{
unsigned long cr8;
asm volatile("movq %%cr8,%0" : "=r" (cr8));
return cr8;
}
static inline void native_write_cr8(unsigned long val)
{
asm volatile("movq %0,%%cr8" :: "r" (val) : "memory");
}
#endif
static inline void native_wbinvd(void)
{
asm volatile("wbinvd": : :"memory");
}
#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#define read_cr0() (native_read_cr0())
#define write_cr0(x) (native_write_cr0(x))
#define read_cr2() (native_read_cr2())
#define write_cr2(x) (native_write_cr2(x))
#define read_cr3() (native_read_cr3())
#define write_cr3(x) (native_write_cr3(x))
#define read_cr4() (native_read_cr4())
#define read_cr4_safe() (native_read_cr4_safe())
#define write_cr4(x) (native_write_cr4(x))
#define wbinvd() (native_wbinvd())
#ifdef CONFIG_X86_64
#define read_cr8() (native_read_cr8())
#define write_cr8(x) (native_write_cr8(x))
#define load_gs_index native_load_gs_index
#endif
/* Clear the 'TS' bit */
#define clts() (native_clts())
#endif/* CONFIG_PARAVIRT */
#define stts() write_cr0(read_cr0() | X86_CR0_TS)
#endif /* __KERNEL__ */
static inline void clflush(volatile void *__p)
{
asm volatile("clflush %0" : "+m" (*(volatile char __force *)__p));
}
#define nop() asm volatile ("nop")
void disable_hlt(void);
void enable_hlt(void);
void cpu_idle_wait(void);
extern unsigned long arch_align_stack(unsigned long sp);
extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
void default_idle(void);
void stop_this_cpu(void *dummy);
/*
* Force strict CPU ordering.
* And yes, this is required on UP too when we're talking
* to devices.
*/
#ifdef CONFIG_X86_32
/*
* Some non-Intel clones support out of order store. wmb() ceases to be a
* nop for these.
*/
#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
#define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
#else
#define mb() asm volatile("mfence":::"memory")
#define rmb() asm volatile("lfence":::"memory")
#define wmb() asm volatile("sfence" ::: "memory")
#endif
/**
* read_barrier_depends - Flush all pending reads that subsequents reads
* depend on.
*
* No data-dependent reads from memory-like regions are ever reordered
* over this barrier. All reads preceding this primitive are guaranteed
* to access memory (but not necessarily other CPUs' caches) before any
* reads following this primitive that depend on the data return by
* any of the preceding reads. This primitive is much lighter weight than
* rmb() on most CPUs, and is never heavier weight than is
* rmb().
*
* These ordering constraints are respected by both the local CPU
* and the compiler.
*
* Ordering is not guaranteed by anything other than these primitives,
* not even by data dependencies. See the documentation for
* memory_barrier() for examples and URLs to more information.
*
* For example, the following code would force ordering (the initial
* value of "a" is zero, "b" is one, and "p" is "&a"):
*
*
* CPU 0 CPU 1
*
* b = 2;
* memory_barrier();
* p = &b; q = p;
* read_barrier_depends();
* d = *q;
*
*
* because the read of "*q" depends on the read of "p" and these
* two reads are separated by a read_barrier_depends(). However,
* the following code, with the same initial values for "a" and "b":
*
*
* CPU 0 CPU 1
*
* a = 2;
* memory_barrier();
* b = 3; y = b;
* read_barrier_depends();
* x = a;
*
*
* does not enforce ordering, since there is no data dependency between
* the read of "a" and the read of "b". Therefore, on some CPUs, such
* as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
* in cases like this where there are no data dependencies.
**/
#define read_barrier_depends() do { } while (0)
#ifdef CONFIG_SMP
#define smp_mb() mb()
#ifdef CONFIG_X86_PPRO_FENCE
# define smp_rmb() rmb()
#else
# define smp_rmb() barrier()
#endif
#ifdef CONFIG_X86_OOSTORE
# define smp_wmb() wmb()
#else
# define smp_wmb() barrier()
#endif
#define smp_read_barrier_depends() read_barrier_depends()
#define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
#else
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while (0)
#define set_mb(var, value) do { var = value; barrier(); } while (0)
#endif
/*
* Stop RDTSC speculation. This is needed when you need to use RDTSC
* (or get_cycles or vread that possibly accesses the TSC) in a defined
* code region.
*
* (Could use an alternative three way for this if there was one.)
*/
static __always_inline void rdtsc_barrier(void)
{
alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}
#endif /* _ASM_X86_SYSTEM_H */