Casual Problem Grinding: Word Ladder I & Word Ladder II

Problem: link

Given two words (start and end), and a dictionary, find the length of shortest transformation sequence from start to end, such that:

  1. Only one letter can be changed at a time
  2. Each intermediate word must exist in the dictionary

For example,

Given:
start = "hit"
end = "cog"
dict = ["hot","dot","dog","lot","log"]

As one shortest transformation is "hit" -> "hot" -> "dot" -> "dog" -> "cog",
return its length 5.

Note:

    • Return 0 if there is no such transformation sequence.
    • All words have the same length.
    • All words contain only lowercase alphabetic characters.

 

Solution approach:

This is clearly a search problem.

First, starting from the root node start, we can enumerate the 26 letters at each position and look the resulting words up in the dictionary to find the possible next steps (sketched below). Words that have already been visited should not be processed again, which shrinks the search space. Since the problem asks for the shortest number of steps, with breadth-first search we can simply delete a word from the dictionary once it has been visited: even if some later path could still transform into it, that path would reach it with a step count no smaller than the one at which an "uncle" or sibling node reached it first, so it can never be the shortest and can safely be ignored. With depth-first search, however, we cannot delete so freely. For example, suppose the start abc has children abe and abd, and abe is pushed first. The search follows abe-aye-tye-tyd-xyd-xyz and reaches the end xyz; once abe's descendants are exhausted, the nodes along that path have been deleted, so when abd's descendants are searched, the path abd-xbd-xyd-xyz can no longer be found and the answer comes out wrong.
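As a minimal sketch of that neighbor enumeration (the helper name getNeighbors is my own; the solutions below inline this logic instead of calling a helper):

#include <string>
#include <vector>
#include <unordered_set>
using namespace std;

// Collect every word that differs from `word` in exactly one position
// and is still present in `dict`.
vector<string> getNeighbors(const string &word, const unordered_set<string> &dict) {
    vector<string> neighbors;
    string candidate = word;
    for (size_t i = 0; i < candidate.size(); i++) {
        char original = candidate[i];
        for (char c = 'a'; c <= 'z'; c++) {
            if (c == original) continue;      // skip the unchanged word itself
            candidate[i] = c;
            if (dict.count(candidate)) neighbors.push_back(candidate);
        }
        candidate[i] = original;              // restore before moving to the next position
    }
    return neighbors;
}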

At the same time, a variable curr_min_steps records the shortest step count found so far, and perhaps_min_steps holds the theoretical minimum when the dictionary contents are ignored, i.e. one plus the number of positions where start and end differ; together they allow some pruning of the search.

class Solution {
public:
    int ladderLength(string start, string end, unordered_set<string> &dict) {
        // handle a few special cases first
        if(start == end)
            return 1;
        if(dict.empty())
            return 0;

        int perhaps_min_steps = 1, curr_min_steps = 10000;
        for(int i = 0; i < start.size(); i++){
            if(start[i] != end[i]) perhaps_min_steps += 1;
        }
        if(perhaps_min_steps == 2)
            return 2;

        queue<pair<string, int>> path;
        path.push(make_pair(start, 1));

        while(!path.empty())
        {
            pair<string, int> top = path.front();
            path.pop();
            if(top.second >= curr_min_steps) continue;
            string pre = "", post = top.first;
            for(int i = 0; i < top.first.size(); i++){
                pre = top.first.substr(0, i);
                post = post.substr(1);
                for(char c = 'a'; c <= 'z'; c++){
                    string target = pre + c + post;
                    if(target == end){ // found a transformation that reaches end
                        if(top.second + 1 < curr_min_steps) // shorter than the current best
                            curr_min_steps = top.second + 1;
                        if(curr_min_steps == perhaps_min_steps){
                            return curr_min_steps; // already at the theoretical minimum
                        } else {
                            break;
                        }
                    }
                    else{ // an intermediate word
                        if(dict.find(target) != dict.end()){
                            if(top.second+1 > curr_min_steps){
                                continue;
                            } else {
                                path.push(make_pair(target, top.second+1));
                                dict.erase(target);
                            }
                        } else {
                            continue;
                        }
                    }
                }
            }
        }
        if(curr_min_steps == 10000)
            return 0;
        return curr_min_steps;
    }        
};
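For reference, a minimal driver for the example above (the main function and the printing are my own additions, not part of the original submission; it assumes the Solution class above is in the same translation unit):

#include <iostream>
#include <string>
#include <queue>
#include <unordered_set>
using namespace std;

// class Solution { ... }  // as defined above

int main() {
    unordered_set<string> dict = {"hot", "dot", "dog", "lot", "log"};
    Solution sol;
    // Expected output: 5, i.e. "hit" -> "hot" -> "dot" -> "dog" -> "cog"
    cout << sol.ladderLength("hit", "cog", dict) << endl;
    return 0;
}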

 

Follow-up: find all possible shortest transformation paths

Problem: link

Given two words (start and end), and a dictionary, find all shortest transformation sequence(s) from start to end, such that:

  1. Only one letter can be changed at a time
  2. Each intermediate word must exist in the dictionary

For example,

Given:
start = "hit"
end = "cog"
dict = ["hot","dot","dog","lot","log"]

Return

  [
    ["hit","hot","dot","dog","cog"],
    ["hit","hot","lot","log","cog"]
  ]

Note:

  • All words have the same length.
  • All words contain only lowercase alphabetic characters.

Solution approach:

Following the idea from the previous problem: the pair used to store a word together with its step count from the start; now it stores the vector of its ancestors instead. We can no longer erase a word from the dictionary right after pushing it, because several paths may share a node, e.g. red-ted-tex-tax and red-rex-tex-tax both pass through tex. After making this change, the judge reported that the solution exceeded the memory limit.

Storing a full ancestor vector per queue entry is indeed wasteful, so further pruning and deletion are needed.

a. Deletion: at first I deleted from this angle: once a node is visited, its parent can be removed from the dictionary. That submission then failed with a segmentation fault, hinting at a stack overflow. The hint was not particularly useful, but I still tried a further optimization: when processing nodes of level N, all nodes of level N-1 can be removed from the dictionary.

b. Pruning: record the dead-end nodes, i.e. words from which no further step is possible.

There are further optimizations I did not implement; for example, sibling nodes share the same ancestor path, so an adjacency-style (shared-parent) representation could reduce the memory footprint. A sketch of that idea follows below.
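As a hedged sketch of that unimplemented idea (the names buildLadders, parents and backtrack are mine, not from the original code): run a level-by-level BFS that records, for every word, the set of parent words that reach it on a shortest path, then rebuild all shortest paths by walking the parent pointers backwards. Each word stores only its direct parents instead of a full ancestor vector, which is what saves memory.

#include <string>
#include <vector>
#include <unordered_set>
#include <unordered_map>
#include <utility>
using namespace std;

// Walk the parent pointers from `word` back to `start`, emitting every shortest path.
static void backtrack(const string &word, const string &start,
                      unordered_map<string, vector<string>> &parents,
                      vector<string> &path, vector<vector<string>> &result) {
    path.push_back(word);
    if (word == start) {
        result.push_back(vector<string>(path.rbegin(), path.rend())); // reverse into start -> end order
    } else {
        for (const string &p : parents[word])
            backtrack(p, start, parents, path, result);
    }
    path.pop_back();
}

vector<vector<string>> buildLadders(string start, string end, unordered_set<string> dict) {
    unordered_map<string, vector<string>> parents; // word -> its parents on shortest paths
    vector<vector<string>> result;
    unordered_set<string> level = {start};
    dict.erase(start);
    bool found = false;
    while (!level.empty() && !found) {
        unordered_set<string> next;
        for (const string &word : level) {
            string cand = word;
            for (size_t i = 0; i < cand.size(); i++) {
                char orig = cand[i];
                for (char c = 'a'; c <= 'z'; c++) {
                    cand[i] = c;
                    if (cand == end || dict.count(cand)) {
                        parents[cand].push_back(word);
                        next.insert(cand);
                        if (cand == end) found = true;
                    }
                }
                cand[i] = orig;
            }
        }
        for (const string &w : next) dict.erase(w); // erase the newly discovered words so deeper levels never revisit them
        level = move(next);
    }
    if (found) {
        vector<string> path;
        backtrack(end, start, parents, path, result);
    }
    return result;
}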

After working on this on and off for a few hours, my final submission only reached 47% accuracy. Looking at the failing cases, though, the answers contain the same paths, just in a different vector order, so I could not be bothered to chase it further; the source code is attached below.

    vector<vector<string>> findLadders(string start, string end, unordered_set<string> &dict) {
        vector<vector<string>> result;

        if(start == end || dict.empty())
            return result;

        int curr_min_steps = 10000;  
        unordered_set<string> badwords; // intended for pruning idea (b); unused in this version

        queue<pair<string, vector<string>>> path;
        vector<string> prepath;
        path.push(make_pair(start, prepath));

        int level = 0;
        vector<string> uncles; // words discovered while expanding the previous level
        while(!path.empty())
        {
            pair<string, vector<string>> top = path.front();
            path.pop();
            
            if(top.second.size() == level){ // first node of a new level
                level += 1;
                for(int i = 0; i < uncles.size(); i++){
                    dict.erase(uncles[i]);
                }
                uncles.clear();
            }           

            string pre = "", post = top.first;
            int prelength = top.second.size();
            if(prelength > curr_min_steps){
                break;
            }
            for(int i = 0; i < top.first.size(); i++){
                pre = top.first.substr(0, i);
                post = post.substr(1);
                for(char c = 'a'; c <= 'z'; c++){
                    string target = pre + c + post;
                    if(target == end){ // found a transformation that reaches end
                        if(prelength <= curr_min_steps){ // no longer than the current shortest
                            curr_min_steps = prelength;
                            vector<string> newpath(top.second);
                            newpath.push_back(top.first);
                            newpath.push_back(end);
                            result.push_back(newpath);
                            break;
                        } else {
                            continue;
                        }
                    }
                    else{ // an intermediate word
                        if(dict.find(target) != dict.end() 
                            &&  find(top.second.begin(), top.second.end(), target) == top.second.end())
                        {
                            if(prelength > curr_min_steps){
                                continue;
                            } else {
                                vector<string> newpath(top.second);
                                newpath.push_back(top.first);
                                path.push(make_pair(target, newpath));
                                uncles.push_back(target);
                            }
                        } else {
                            continue;
                        }
                    }
                }
            }
        }

        return result;
    }
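Again just a driver sketch (the main function is mine; it assumes findLadders is available as a free function, or is called through an instance if it stays inside a class as in Problem I):

#include <iostream>
#include <string>
#include <vector>
#include <queue>
#include <unordered_set>
#include <algorithm>
using namespace std;

// findLadders(...) as defined above

int main() {
    unordered_set<string> dict = {"hot", "dot", "dog", "lot", "log"};
    vector<vector<string>> ladders = findLadders("hit", "cog", dict);
    // Expected, in some order:
    //   hit hot dot dog cog
    //   hit hot lot log cog
    for (const vector<string> &ladder : ladders) {
        for (const string &word : ladder) cout << word << ' ';
        cout << '\n';
    }
    return 0;
}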

 

The failing case reported by the judge:

Test case:
"qa","sq",["si","go","se","cm","so","ph","mt","db","mb","sb","kr","ln","tm","le","av","sm","ar","ci","ca","br","ti","ba","to","ra","fa","yo","ow","sn","ya","cr","po","fe","ho","ma","re","or","rn","au","ur","rh","sr","tc","lt","lo","as","fr","nb","yb","if","pb","ge","th","pm","rb","sh","co","ga","li","ha","hz","no","bi","di","hi","qa","pi","os","uh","wm","an","me","mo","na","la","st","er","sc","ne","mn","mi","am","ex","pt","io","be","fm","ta","tb","ni","mr","pa","he","lr","sq","ye"]
Expected output:
[["qa","ba","be","se","sq"],["qa","ba","bi","si","sq"],["qa","ba","br","sr","sq"],["qa","ca","cm","sm","sq"],["qa","ca","co","so","sq"],["qa","la","ln","sn","sq"],["qa","la","lt","st","sq"],["qa","ma","mb","sb","sq"],["qa","pa","ph","sh","sq"],["qa","ta","tc","sc","sq"],["qa","fa","fe","se","sq"],["qa","ga","ge","se","sq"],["qa","ha","he","se","sq"],["qa","la","le","se","sq"],["qa","ma","me","se","sq"],["qa","na","ne","se","sq"],["qa","ra","re","se","sq"],["qa","ya","ye","se","sq"],["qa","ca","ci","si","sq"],["qa","ha","hi","si","sq"],["qa","la","li","si","sq"],["qa","ma","mi","si","sq"],["qa","na","ni","si","sq"],["qa","pa","pi","si","sq"],["qa","ta","ti","si","sq"],["qa","ca","cr","sr","sq"],["qa","fa","fr","sr","sq"],["qa","la","lr","sr","sq"],["qa","ma","mr","sr","sq"],["qa","fa","fm","sm","sq"],["qa","pa","pm","sm","sq"],["qa","ta","tm","sm","sq"],["qa","ga","go","so","sq"],["qa","ha","ho","so","sq"],["qa","la","lo","so","sq"],["qa","ma","mo","so","sq"],["qa","na","no","so","sq"],["qa","pa","po","so","sq"],["qa","ta","to","so","sq"],["qa","ya","yo","so","sq"],["qa","ma","mn","sn","sq"],["qa","ra","rn","sn","sq"],["qa","ma","mt","st","sq"],["qa","pa","pt","st","sq"],["qa","na","nb","sb","sq"],["qa","pa","pb","sb","sq"],["qa","ra","rb","sb","sq"],["qa","ta","tb","sb","sq"],["qa","ya","yb","sb","sq"],["qa","ra","rh","sh","sq"],["qa","ta","th","sh","sq"]] 
Your output:
[["qa","ba","be","se","sq"],["qa","ba","bi","si","sq"],["qa","ba","br","sr","sq"],["qa","ca","ci","si","sq"],["qa","ca","cm","sm","sq"],["qa","ca","co","so","sq"],["qa","ca","cr","sr","sq"],["qa","fa","fe","se","sq"],["qa","fa","fm","sm","sq"],["qa","fa","fr","sr","sq"],["qa","ga","ge","se","sq"],["qa","ga","go","so","sq"],["qa","ha","he","se","sq"],["qa","ha","hi","si","sq"],["qa","ha","ho","so","sq"],["qa","la","le","se","sq"],["qa","la","li","si","sq"],["qa","la","ln","sn","sq"],["qa","la","lo","so","sq"],["qa","la","lr","sr","sq"],["qa","la","lt","st","sq"],["qa","ma","mb","sb","sq"],["qa","ma","me","se","sq"],["qa","ma","mi","si","sq"],["qa","ma","mn","sn","sq"],["qa","ma","mo","so","sq"],["qa","ma","mr","sr","sq"],["qa","ma","mt","st","sq"],["qa","na","nb","sb","sq"],["qa","na","ne","se","sq"],["qa","na","ni","si","sq"],["qa","na","no","so","sq"],["qa","pa","pb","sb","sq"],["qa","pa","ph","sh","sq"],["qa","pa","pi","si","sq"],["qa","pa","pm","sm","sq"],["qa","pa","po","so","sq"],["qa","pa","pt","st","sq"],["qa","ra","rb","sb","sq"],["qa","ra","re","se","sq"],["qa","ra","rh","sh","sq"],["qa","ra","rn","sn","sq"],["qa","ta","tb","sb","sq"],["qa","ta","tc","sc","sq"],["qa","ta","th","sh","sq"],["qa","ta","ti","si","sq"],["qa","ta","tm","sm","sq"],["qa","ta","to","so","sq"],["qa","ya","yb","sb","sq"],["qa","ya","ye","se","sq"],["qa","ya","yo","so","sq"]]

  

Reposted from: https://www.cnblogs.com/wilde/p/8385220.html
