XRAY 1.9.11 (Latest) Fully Reverse Engineered, Plus a Bonus Pipeline Script
2023-06-02 22:57 · Author: 蓝极战队

I actually cracked it on the very day the update was released; I'm only publishing it now because a lot of friends have been asking for it recently. Only the Win64 and Linux amd64 builds are cracked, since those are the only two platforms I use regularly.

Official latest version: 1.9.11

The verification is the same old routine. There is no need to wrestle with the license algorithm itself: validate with an expired license, then patch the jne following the key test instruction into a je, so an expired license is accepted as valid. On amd64 two jne instructions now need to be patched (as I recall, only one jne had to be changed in earlier versions~~~~).

The package has already been uploaded; the extraction password is as follows:

One more thing: as a veteran and obsessively picky old programmer, I have read through Chaitin's XRAY Golang source code carefully, and in every respect I find it genuinely well built, very much to my liking. If I wrote it myself it would be no better (yes, I'm bragging)~~~~

As a bonus, here is a pipeline script I use all the time: crawl with crawlergo, then hand the results to xray for scanning.

0x01 For batch targets, the usual workflow is to run a search on FOFA (or a similar engine) first, then use a small script to filter out the assets that are actually reachable.

The filter script is as follows:

import requests


def foo():
    # Read candidate targets (one per line) and keep the ones that answer with HTTP 200.
    for url in open("url.txt"):
        url = url.strip()
        if not url:
            continue
        if url.startswith("http://") or url.startswith("https://"):
            candidates = [url]
        else:
            candidates = [f"http://{url}", f"https://{url}"]
        for candidate in candidates:
            try:
                ok = requests.get(candidate, timeout=(5, 8))
            except requests.RequestException:
                continue
            print(candidate, ok.status_code)
            if ok.status_code == 200:
                # Append reachable URLs for the crawl/scan stage below.
                with open("./url_ok.txt", "a+") as url_ok:
                    url_ok.write(candidate + "\n")
                break
        else:
            print(f"{url} is not reachable")


if __name__ == "__main__":
    foo()
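The FOFA step in 0x01 can be scripted as well. Below is a minimal sketch, assuming the public FOFA search API (api/v1/search/all) and valid credentials; FOFA_EMAIL, FOFA_KEY, and the example query are placeholders you would replace. It writes candidate hosts into the url.txt that the filter above reads.

import base64

import requests

FOFA_EMAIL = "[email protected]"      # placeholder credential
FOFA_KEY = "your-fofa-api-key"     # placeholder credential
QUERY = 'domain="example.com"'     # placeholder FOFA query


def fofa_to_urls(path="url.txt", size=100):
    params = {
        "email": FOFA_EMAIL,
        "key": FOFA_KEY,
        "qbase64": base64.b64encode(QUERY.encode()).decode(),
        "fields": "host",
        "size": size,
    }
    resp = requests.get("https://fofa.info/api/v1/search/all",
                        params=params, timeout=30)
    data = resp.json()
    if data.get("error"):
        raise RuntimeError(data.get("errmsg", "FOFA query failed"))
    with open(path, "w") as f:
        for row in data.get("results", []):
            # With a single field each row may be a plain string;
            # with several fields it is a list -- handle both.
            host = row if isinstance(row, str) else row[0]
            f.write(host + "\n")


if __name__ == "__main__":
    fofa_to_urls()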

0x02 Start XRAY in passive listening mode

./xray webscan --listen 127.0.0.1:7777 --html-output xxx.html

If it is running on a VPS, you can push it to the background, with standard output going to out.log and errors to err.log:

nohup ./xray webscan --listen 127.0.0.1:7777 --html-output xxx.html > out.log 2>err.log &
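Before wiring the pipeline in, it is worth checking that the listener is actually reachable by sending a single request through it. A minimal sketch (http://example.com is just a placeholder test URL); if xray is up, the request will also show up in its console output:

import requests

# xray's passive listener acts as an HTTP proxy on 127.0.0.1:7777.
proxies = {
    "http": "http://127.0.0.1:7777",
    "https": "http://127.0.0.1:7777",
}

try:
    resp = requests.get("http://example.com", proxies=proxies, timeout=10)
    print("proxy OK, status:", resp.status_code)
except requests.RequestException as exc:
    print("proxy not reachable:", exc)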

0x03 Run the pipeline script

Put the filtered, reachable targets into targets.txt, one per line, then run the script directly.

import queue
import subprocess
import threading
import time
import warnings

import requests
import simplejson

warnings.filterwarnings(action='ignore')

urls_queue = queue.Queue()
tclose = 0

# xray's passive listener from 0x02 acts as the proxy here.
PROXIES = {
    'http': 'http://127.0.0.1:7777',
    'https': 'http://127.0.0.1:7777',
}


def opt2File(paths):
    # Record every request replayed through xray.
    with open('crawl_result.txt', 'a') as f:
        f.write(paths + '\n')


def opt2File2(subdomains):
    # Record sub-domains discovered by crawlergo.
    with open('sub_domains.txt', 'a') as f:
        f.write(subdomains + '\n')


def request0():
    # Consumer thread: replay crawled requests through the xray proxy.
    while tclose == 0 or not urls_queue.empty():
        if urls_queue.qsize() == 0:
            time.sleep(0.5)  # avoid busy-spinning while the queue is empty
            continue
        print(urls_queue.qsize())
        req = urls_queue.get()
        url0 = req['url']
        headers0 = req['headers']
        method0 = req['method']
        data0 = req['data']
        try:
            if method0 == 'GET':
                requests.get(url0, headers=headers0, proxies=PROXIES,
                             timeout=30, verify=False)
                opt2File(url0)
            elif method0 == 'POST':
                requests.post(url0, headers=headers0, data=data0,
                              proxies=PROXIES, timeout=30, verify=False)
                opt2File(url0)
        except requests.RequestException:
            continue


def main(target):
    # Producer: crawl one target with crawlergo and queue every request it finds.
    cmd = ["./crawlergo", "-c", "/usr/bin/google-chrome", "-t", "20",
           "-f", "smart", "--fuzz-path", "--output-mode", "json", target]
    rsp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = rsp.communicate()
    try:
        result = simplejson.loads(output.decode().split("--[Mission Complete]--")[1])
    except (IndexError, ValueError):
        return
    req_list = result["req_list"]
    sub_domain = result["sub_domain_list"]
    print(target)
    print("[crawl ok]")
    for subd in sub_domain:
        opt2File2(subd)
    for req in req_list:
        urls_queue.put(req)
    print("[scanning]")


if __name__ == '__main__':
    # One worker replays requests while the main thread keeps crawling.
    t = threading.Thread(target=request0)
    t.start()
    with open("targets.txt") as file:
        for text in file.readlines():
            main(text.strip())
    tclose = 1  # tell the worker to exit once the queue drains
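A note on the design: the script is a simple producer/consumer pair. The main thread runs crawlergo against the targets one by one and pushes every discovered request into urls_queue, while a single worker thread drains the queue and replays each request through the xray proxy on 127.0.0.1:7777, so scanning overlaps with crawling. tclose is only set after the last target has been crawled, which lets the worker exit once the queue is empty.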

Finally, just sit back and happily wait for the report~~~~~~

Download: follow this official account and reply "xray" to get it.


Article source: http://mp.weixin.qq.com/s?__biz=MzkwMDMyOTA1OA==&mid=2247484140&idx=1&sn=80ac823167e3686c9ae04982d6502915&chksm=c044f9e1f73370f72817ec29addd7464f767d0ddaa33ed602111a61d21dd1d3d21e99ddf3dcf#rd