A First Look at Dynamic Website Promotion and Search Engine Optimization

I recently ran into the problem of promoting a dynamic website, and after digging through the available material my conclusion was: it's hard! Search engine spiders (robots) apparently gather pages mainly by browsing directory structures, but a dynamic site has very few static pages; the vast majority are generated on the fly, so getting indexed is difficult. Roughly speaking, the current promotion options are: first, submit your site to the major search engines; second, get listed in the various directory services, large and small; third, exchange links and promote by email; fourth, generate your own sitemap and robots files (there are plenty of other miscellaneous tricks besides). This article describes the fourth approach.

       The first task is to generate a sitemap from the individual links of the dynamic site. (Reportedly, Google, Microsoft, and Yahoo have jointly announced a unified standard, Sitemap 0.9; in practice only Google currently accepts sitemap submissions. See: http://www.google.com/support/webmasters/bin/answer.py?answer=40318&hl=zh_CN.) A sample sitemap looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
    <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2005-01-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
    </url>
</urlset>

       My approach is to record the clicked links in a table, then write a page that generates the sitemap file Sitemap.xml (the directory storing the file needs write permission). The code is as follows:

/*
 * Generate the sitemap file Sitemap.xml
 * sid: site code
 */
private void CreateXMLFile(string sid)
{
    SqlParameter param1 = new SqlParameter("@SID", SqlDbType.VarChar, 20);
    param1.Value = sid;
    IDataParameter[] parameters = new IDataParameter[] { param1 };
    DbHelperSQL dbHelper = new DbHelperSQL(connStr);
    string outParams = "";
    DataSet ds = dbHelper.RunProcedure("spGetSiteMap", parameters, "TmpSiteMapInfo", ref outParams);
    if (ds.Tables[0].Rows.Count > 0)
    {
        string XMLSpace = "http://www.google.com/schemas/sitemap/0.9";
        DateTime dt = System.DateTime.Now;

        XmlText xmltext;
        XmlElement xmlelem;

        // Create a new, empty document.
        XmlDocument doc = new XmlDocument();
        XmlDeclaration docNode = doc.CreateXmlDeclaration("1.0", "UTF-8", null);
        doc.AppendChild(docNode);

        // Create and insert the root <urlset> element.
        XmlNode urlset = doc.CreateNode(XmlNodeType.Element, "urlset", XMLSpace);
        doc.AppendChild(urlset);

        foreach (DataRow dr in ds.Tables[0].Rows)
        {
            // Create a nested <url> element for each recorded link.
            XmlElement url = doc.CreateElement("", "url", XMLSpace);
            urlset.AppendChild(url);

            // <loc>: the absolute URL itself.
            xmlelem = doc.CreateElement("", "loc", XMLSpace);
            xmltext = doc.CreateTextNode(dr["URL"].ToString());
            xmlelem.AppendChild(xmltext);
            url.AppendChild(xmlelem);

            // <lastmod>: today's date in yyyy-MM-dd form.
            xmlelem = doc.CreateElement("", "lastmod", XMLSpace);
            xmltext = doc.CreateTextNode(string.Format("{0:u}", dt).Substring(0, 10));
            xmlelem.AppendChild(xmltext);
            url.AppendChild(xmlelem);

            // <changefreq>: frequently-changing pages (Type == 1) are "daily".
            xmlelem = doc.CreateElement("", "changefreq", XMLSpace);
            if (dr["Type"].ToString() == "1")
                xmltext = doc.CreateTextNode("daily");
            else
                xmltext = doc.CreateTextNode("monthly");
            xmlelem.AppendChild(xmltext);
            url.AppendChild(xmlelem);

            // <priority>: taken from the link's order number.
            xmlelem = doc.CreateElement("", "priority", XMLSpace);
            xmltext = doc.CreateTextNode(dr["OrderNo"].ToString());
            xmlelem.AppendChild(xmltext);
            url.AppendChild(xmlelem);
        }

        doc.Save(Server.MapPath("Sitemap.xml"));
    }

    return;
}
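
       How the method gets triggered is up to the site; as a minimal hypothetical sketch (the handler name and the site code "001" are placeholders of mine, not from the project), it could be called from a button on an admin page:

// Hypothetical trigger: regenerate the sitemap on demand from an admin page.
// "001" is an assumed site code stored in the click-tracking table.
protected void btnGenerateMap_Click(object sender, EventArgs e)
{
    CreateXMLFile("001");
}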

       The code above is fairly straightforward. One key point: if a parent node (e.g. urlset) carries a namespace, its child nodes must carry the same namespace as well; otherwise each child is automatically given an empty namespace (this seems to run counter to intuition, and it cost me quite a bit of time).
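
       To see the pitfall in isolation, here is a minimal standalone sketch (the class name is mine; only System and System.Xml are needed). A child created without the parent's namespace is serialized with an explicit empty xmlns attribute:

using System;
using System.Xml;

class NamespaceDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        string ns = "http://www.google.com/schemas/sitemap/0.9";

        XmlElement urlset = doc.CreateElement("urlset", ns);
        doc.AppendChild(urlset);

        // Created WITHOUT the namespace: serialized as <url xmlns="" />,
        // explicitly marking the child as belonging to no namespace at all.
        urlset.AppendChild(doc.CreateElement("url"));

        // Created WITH the parent's namespace: serialized as a plain <url />,
        // inheriting the default xmlns from its parent.
        urlset.AppendChild(doc.CreateElement("url", ns));

        doc.Save(Console.Out);
        // Output (XML declaration omitted):
        // <urlset xmlns="http://www.google.com/schemas/sitemap/0.9">
        //   <url xmlns="" />
        //   <url />
        // </urlset>
    }
}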

       The second task is the robots file (the configuration file read by crawlers). It too has a standard, with plenty of references online. Below is my code for generating the robots file:

/*
 * Generate the robots.txt file
 * sid: site code
 */
private void CreateRobotFile(string sid)
{
    SqlParameter param1 = new SqlParameter("@SID", SqlDbType.VarChar, 20);
    param1.Value = sid;
    IDataParameter[] parameters = new IDataParameter[] { param1 };
    DbHelperSQL dbHelper = new DbHelperSQL(connStr);
    string outParams = "";
    DataSet ds = dbHelper.RunProcedure("spGetSiteMap", parameters, "TmpSiteMapInfo", ref outParams);
    if (ds.Tables[0].Rows.Count > 0)
    {
        FileStream fs = new FileStream(Server.MapPath("robots.txt"), FileMode.OpenOrCreate, FileAccess.Write);
        StreamWriter m_streamWriter = new StreamWriter(fs);
        m_streamWriter.Flush();
        // Use the StreamWriter to write the file content, starting at the beginning.
        m_streamWriter.BaseStream.Seek(0, SeekOrigin.Begin);
        m_streamWriter.WriteLine("# Robots.txt file from http://www.hugesoft.net");
        m_streamWriter.WriteLine("# All robots will spider the domain");
        m_streamWriter.WriteLine("");
        m_streamWriter.WriteLine("Sitemap: http://www.hugesoft.net/Sitemap.xml");
        m_streamWriter.WriteLine("User-agent: *");
        m_streamWriter.WriteLine("Disallow: ");
        foreach (DataRow dr in ds.Tables[0].Rows)
        {
            // Strip the scheme and domain from the absolute URL,
            // keeping only the site-relative path.
            string str = dr["URL"].ToString().ToLower();
            int index = str.IndexOf("http://");
            if (index < 0)
                continue;
            index = str.IndexOf("/", index + 7);
            if (index < 0)
                continue;
            str = str.Substring(index);
            m_streamWriter.WriteLine("Allow: " + str);
        }

        // Flush and close the file.
        m_streamWriter.Flush();
        m_streamWriter.Close();
    }
}

 
       Since I record absolute URLs, generating robots.txt requires parsing each URL and stripping the domain part. Note also that certain directives (e.g. Sitemap, Allow) are not necessarily supported by every type of robot.
       The robots.txt file must be placed in the site's root directory; Sitemap.xml can be submitted to Google (currently the only engine that accepts it).
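
       For reference, if the click-tracking table held just two made-up URLs, http://www.hugesoft.net/news.aspx?id=1 and http://www.hugesoft.net/doc.aspx?id=8, the generated robots.txt would come out roughly as:

# Robots.txt file from http://www.hugesoft.net
# All robots will spider the domain

Sitemap: http://www.hugesoft.net/Sitemap.xml
User-agent: *
Disallow: 
Allow: /news.aspx?id=1
Allow: /doc.aspx?id=8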
 
